Who Should (or Could) Protect Our Data?

Last week, when I talked about the current furor around the Cambridge Analytica scandal, I said that part of the blame – or at least, the responsibility – for the protection of our own data belonged to us. Reader Chuck Lantz responded with:

“In short, just because a company such as FaceBook can do something doesn’t mean they should.  We trusted FaceBook and they took advantage of that trust. Not being more careful with our own personal info, while not very wise, is not a crime. And attempting to dole out blame to both victim and perpetrator ain’t exactly wise, either.”

Whether it’s wise or not, when it comes to our own data, there are only three places we can reasonably look to protect it:

A) The Government

One only has to look at the supposed “grilling” of Zuckerberg by Congress to realize how forlorn a hope this is. In a follow-up post, Wharton ran a list of the questions Congress should have asked, compiled from its own faculty. My personal favorite comes from Eric Clemons, professor of Operations, Information and Decisions:

“You benefited financially from Cambridge Analytica’s clients’ targeting of fake news and inflammatory posts. Why did you wait years to report what Cambridge Analytica was doing?”

Technology has left the regulatory ability to control it in the dust. The EU is probably the most aggressive legislative jurisdiction in the world when it comes to protecting data privacy. The General Data Protection Regulation takes effect on May 25 of this year and incorporates sweeping new protections for EU citizens. But it will inevitably come up short in three key areas:

  • Even though it applies to anyone, anywhere, who processes the data of EU citizens, international compliance will be difficult to enforce consistently, especially if that processing extends beyond “friendly” countries.
  • Technology will quickly find “loopholes” – vulnerable gray areas in the legislation that will lead to the misuse of data. Technology will always move faster than legislation. As an example, the GDPR and blockchain technologies are seemingly on a collision course.
  • Most importantly, the GDPR is aimed at data “worst-case scenarios.” But there are many apparently benign applications that can border on misuse of personal data. In trying to police even the worst-case instances, the GDPR imposes restrictions that will directly cost users convenience and functionality. Key areas such as data portability aren’t fully addressed in the new legislation. At the end of the day, even though it’s protecting them, users will find the GDPR a pain in the ass.

Even with these fundamental flaws, the GDPR probably represents the world’s best attempt at data regulation. The US, as we’ve seen in the past week, comes up well short of this. And even if the people involved weren’t doddering, technologically inept old farts, the mechanisms required for passing relevant and timely legislation simply aren’t there. It would be like trying to catch a jet with a lasso. Should this be the job of government? Sure, I can buy that. Can government handle the job? Not based on the evidence we currently have available to us.

B) The companies that aggregate and manipulate our data.

Philosophically, I completely agree with Chuck. Like I said last week – the point of view I took left me ill at ease. We need these companies to be better than they are. We certainly need them to be better than Facebook was. But Facebook has absolutely no incentive to be better. And my fellow Media Insider, Kaila Colbin, nailed this in her column last week:

“Facebook doesn’t benefit if you feel better about yourself, or if you’re a more informed, thoughtful person. It benefits if you spend more time on its site, and buy more stuff. Giving the users control over who sees their posts offers the illusion of individual agency while protecting the prime directive.”

There are no inherent, proximate reasons for companies to be moral. They are built to be profitable (which, by the way, is why governments should never be run like companies). Facebook’s revenue model is directly opposed to the protection of our personal data. And that is why Facebook will try to weather this storm by implementing more self-directed privacy controls to put a good face on things. We will ignore those controls, because it’s a pain in the ass to do otherwise. And this scenario will continue to play out again and again.

C) Ourselves.

It sucks that we have to take this into our own hands. But I don’t see another option. Unless you see something in the first two alternatives that I don’t, I don’t think we have any choice but to take responsibility. Do you want to put your security in the hands of the government, or of Facebook? The first doesn’t have the horsepower to do the job, and the second is heading in the wrong direction.

So if the responsibility ends up being ours, what can we expect?

A few weeks ago, another fellow Insider, Dave Morgan, predicted that the moats around the walled gardens of data collectors like Facebook will get deeper. But the walled-garden approach is not sustainable in the long run. All the market forces are going against it. As markets mature, they move from silos to open markets. The marketplace of data will head in the same direction. Protectionist measures may be implemented in the short term, but they will not be successful.

This doesn’t negate the fact that the protection of personal information has suddenly become a massive pain point, which makes it a huge market opportunity. And like almost all truly meaningful disruptions in the marketplace, I believe the ability to lock down our own data will come from entrepreneurialism. We need a solution that guarantees universal data portability while keeping us in control, without putting an unrealistic maintenance burden on us. Rather than having the various walled gardens warehouse our data, we should retain ownership, offering it to platforms like Facebook only on a case-by-case, “need to know” transactional basis. Will it be disruptive to the current social ecosystem? Absolutely. And that’s a good thing.
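To make that “need to know” idea concrete, here’s a minimal sketch in Python of what user-held data with scoped, expiring grants could look like. Everything in it – the vault class, the method names, the platform strings – is my own illustration, not an existing product or API.

```python
from dataclasses import dataclass, field
import time

# Illustrative sketch of case-by-case, "need to know" data access:
# the user holds the data; platforms get scoped grants that expire.
@dataclass
class PersonalDataVault:
    data: dict
    grants: dict = field(default_factory=dict)  # platform -> (fields, expiry)

    def grant(self, platform: str, fields: list, ttl_seconds: float) -> None:
        """Give one platform access to named fields for a limited time."""
        self.grants[platform] = (set(fields), time.time() + ttl_seconds)

    def read(self, platform: str, field_name: str):
        """A platform's only way in: every read is checked against live grants."""
        allowed, expiry = self.grants.get(platform, (set(), 0.0))
        if field_name in allowed and time.time() < expiry:
            return self.data[field_name]
        raise PermissionError(f"{platform} has no live grant for {field_name}")

vault = PersonalDataVault({"email": "me@example.com", "location": "Lisbon"})
vault.grant("facebook.com", ["email"], ttl_seconds=3600)
print(vault.read("facebook.com", "email"))   # allowed, for one hour
# vault.read("facebook.com", "location")     # raises PermissionError
```

The point of the sketch is the inversion of custody: the platform never warehouses our data; it gets a transaction-scoped peek, and the grant expires.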

Targeted advertising is not a viable business model for the intertwined worlds of social connection and personal functionality. There is just too much at stake here. The only way it can work is for the organization doing the targeting to retain ownership of the data used for the targeting – and we should not trust them to handle that data ethically. Their profitability depends on going beyond what is – or should be – acceptable to us.

What the Hell is “Time Spent” with Advertising Anyway?

Over at MediaPost’s Research Intelligencer, Joe Mandese is running a series of columns digging into two questions:

  • How much time are consumers spending with advertising?
  • How much is that time worth?

The quick answers are 1.84 hours daily and about $3.40 per hour.
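Before digging into those numbers, it’s worth multiplying them out. A quick sanity check in Python, using only the two figures quoted above:

```python
# Quick sanity check using only the two figures quoted above.
HOURS_PER_DAY = 1.84     # daily time "spent" with advertising
VALUE_PER_HOUR = 3.40    # dollars per hour of that time

daily_value = HOURS_PER_DAY * VALUE_PER_HOUR        # 6.256
annual_value = daily_value * 365                    # 2283.44
print(f"${daily_value:.2f} per person per day")     # $6.26
print(f"${annual_value:,.0f} per person per year")  # $2,283
```

Call it roughly $2,300 worth of “attention” per person per year – a number that only means something if exposure actually equals attention. Which brings us to the gaping hole.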

Although Joe readily admits that these are “back of the envelope” calculations, regular MediaPost reader and commenter Ed Papazian points out a gaping hole in the logic of these questions: an hour of being exposed to ads does not equal an hour spent with those ads, and it certainly doesn’t mean an hour being aware of those ads.

Ignoring this fundamental glitch is symptomatic of the conceit of the advertising business in general. The industry believes a value exchange is possible in which paying consumers to watch advertising somehow translates into advertising that works. The oversimplification required to rationalize this exchange is staggering. It essentially ignores the fields of cognitive psychology and neuroscience. It assumes that audience attention is a simple door that can be opened if only the price is right.

It just isn’t that simple.

Let’s go back to the concept of time spent with media. Many studies have tried to quantify this. But the simple truth is that “media” is too big a catchall category for the quantification to be meaningful. We’re not even attempting to compare apples and oranges. We’re comparing an apple, a jigsaw and a meteor. The cognitive variations alone in how we consume media are immense.

And while I’m on a rant, let’s nuke the term “consumption” altogether, shall we? It’s probably the most misleading word ever coined to describe our relationship with media. We don’t consume media any more than we consume our physical environment. It is an informational context within which we function. We interact with aspects of it with varying degrees of intention. Trying to measure all these interactions with a single yardstick is like trying to measure our physical interactions with water, oxygen, gravity and an apple tree by the same criterion.

Even trying to dig into this question has a major methodological flaw: we almost never think about advertising. It is usually forced on our consciousness. So using a research tool like a survey – which requires respondents to actively consider their response – to explore our subconscious relationship with advertising is like using a banana to drive a nail. It’s the wrong tool for the job. It’s the same as me asking you how much you would pay per hour to have access to gravity.

This current fervor all comes from a prediction by Publicis Groupe Chief Growth Officer Rishad Tobaccowala that the supply of consumer attention will erode by 20% to 30% in the next five years. By putting a number to attention, Tobaccowala fed the mistaken belief that attention is something the industry can manage. The attention of your audience isn’t slipping away because advertising and media buying were mismanaged. It’s slipping away because your audience now has choices, and some of those choices don’t include advertising. Let’s just admit the obvious: people don’t want advertising. We only put up with advertising when we have no choice.

“But wait,” the ad industry is quick to protest. “In surveys, people say they are willing to have ads in return for free access to media. In fact, almost 80% of respondents in a recent survey said they prefer the ad-supported model!”

Again, we have the methodological fly in the ointment: we’re asking people to stop and think about something they never stop and think about. You’re not going to get the right answer. A better test is what happens when you visit a news site with your ad-blocker on and get the pop-up. “Hey,” it says, “we notice you’re using an ad-blocker.” If you have the option of turning the ad-blocker off to see the article or just clicking a link that lets you see it anyway, which are you going to choose? That’s what I thought. And you’re probably in the ad business. It pays your mortgage.

Look, I get that the ad business is in crisis. And I also understand why the industry is motivated to find an answer. But the complexity of the issue in front of us is staggering, and no one is served by oversimplifying it down to a price tag on our attention. We have to understand that we’re in an industry that – given the choice – people would rather have nothing to do with. Unless we accept that, we’ll just keep making the same mistakes over and over again.


The Rain in Spain

Olá! Greetings from the soggy Iberian Peninsula. I’ve been in Spain and Portugal for the last three weeks, which has included – count them – 21 days of rain and gale force winds. Weather aside, it’s been amazing. I have spent very little of that time thinking about online media. But, for what they’re worth, here are some random observations from the last three weeks:

The Importance of Familiarity

While here, I’ve been reading Derek Thompson’s book “Hit Makers.” One of the critical components of a hit is a foundation of familiarity. Once that’s in place, a hit provides just enough novelty to tantalize us. It’s why Hollywood studios seem stuck on the superhero-sequel cycle.

This was driven home to me as I travelled. I’m a do-it-yourself traveller. I avoid packaged vacations whenever and wherever possible. But there is a price to be paid for this. Every time we buy groceries, take a drive, catch a train, fill up with gas or drive through a tollbooth (especially in Portugal), there is a never-ending series of puzzles to be solved. The fact that I know no Portuguese and very little Spanish makes this even more challenging. I’m always up for a good challenge, but I have to tell you: at the end of three weeks, I’m mentally exhausted. I’ve had more than enough novelty, and I’m craving some familiarity.

This has made me rethink the entire concept of familiarity. Our grooves make us comfortable. They’re the foundations that make us secure enough to explore. It’s no coincidence that the words “family” and “familiar” come from the same etymological root.

The Opposite of Agile Development

While in Seville, we visited the cathedral. The main altarpiece, the largest and one of the finest in the world, was the life’s work of one man, Pierre Dancart. He worked on it for 44 years of his life and never saw the finished product. In total, it took over 80 years to complete.

Think about that for a moment. This man worked on this one piece of art for his entire life. There was no morning where he woke up and wondered, “Hmm, what am I going to do today?” This was it, from the time he was barely more than a teenager until he was an old man. And he still never got to see the completed work. That span of time is amazing to me. If built and finished today, it would have been started in 1936.

The Ubiquitous Screen

I love my smartphone. It has saved my ass more than once on this trip. But I was saddened to see that our preoccupation with being connected has spread into every nook and cranny of European culture. Last night, we went for dinner at a lovely little tapas bar in Lisbon. It was achingly romantic. There was a young German couple next to us who may or may not have been in love. It was difficult to tell, because they spent most of the evening staring at their phones rather than at each other.

I have realized that the word “screen” has many meanings, one of which is a “barrier meant to hide things or divide us.”

El Gordo

Finally, after giving my name in a few places and getting mysterious grins in return, I have realized that “gordo” means “fat” in Spanish and Portuguese.

Make of that what you will.

WTF Tech

Do you need a Kuvée?

Wait. Don’t answer yet. Let me first tell you what a Kuvée is: It’s a $178 wine bottle that connects to Wi-Fi.

Okay… let’s try again. Do you need a Kuvée?

Don’t bother answering. You don’t need a Kuvée. No one needs a Kuvée. The earth has 7.2 billion people on it. Not one of them needs a Kuvée. That’s probably why the company is packing up its high-tech bottles and calling it a day. The Kuvée is an example of WTF Tech. Hold that thought, because we’ll get back to it in a minute.

So, we’ve established that you don’t need a Kuvée. “But that’s not the point,” you might say. “It’s not whether I need a Kuvée. It’s whether I want a Kuvée.” Fair point. In our world of ostentatious consumerism, it’s not really about need – it’s about desire. And Lord knows many of the most pretentious and entitled assholes in the world are wine snobs.

But I have to believe that, buried deep in our lizard brain, there is still a tenuous link between wanting something and needing something. Drench it as we might in the best wine technology can serve, there still might be a spark of practicality glowing in the gathering dark of our souls. But like I said, I know some real dickhead wine drinkers. So, who knows? Maybe Kuvée was just ahead of the curve.

And that brings us back to WTF Tech: the application of tech to a problem that doesn’t exist, simply because it’s tech. There is no practical reason why it ever needs to exist. Besides the Kuvée, here are some other examples of WTF Tech:

The Kérastase Hair Coach

This is a hairbrush with an Internet connection. Seriously. It has a microphone that “listens” while you brush your hair, as well as an accelerometer, gyroscope and other sensors. It’s supposed to save you from bruising your hair while you’re brushing it. It retails for “under $200.”


The Hushme Mask

This tech actually does solve a problem, but in a really stupid way. The problem: obnoxious jerks who insist on carrying on their phone conversations at the top of their lungs while sitting next to you. That’s a real problem, right? But here’s the stupid part. For this thing to work, you have to convince the guilty party to wear this Hannibal Lecter-like mask while they’re on the phone. Go ahead, buy one for $189 and give it a shot next time you run into a really loud tele-jerk. Let me know how it works out for you.

Denso Vacuum Shoes

“These boots are made for sucking… and that’s just what they’ll do.”

Finally, an invention that lets you shoe-ver your carpet. That’s right: the Japanese company Denso is working on a prototype of a shoe that vacuums as you walk, storing the dirt in a tiny box in the shoe’s sole. As a special bonus, they look just like a pair of circa-1975 Elton John “Pinball Wizard” boots.

When You’re a Hammer…

We live in a “tech for tech’s sake” time. When all the world is a hi-tech hammer, everything begins to look like a low-tech nail. Each of these questionable gadgets had investors who believed in them. Both the Kuvée and the Hushme had successful crowd-funding campaigns. The Hair Coach and the vacuum shoes have corporate backing. The dot-com bubble that burst in 2000-2002 has just morphed into a bunch of broader-based but no less ephemeral bubbles.

Let me wrap up with a story. Some years ago, I was speaking at a conference and my panel was the last one of the day. After it wrapped, the moderator, a few of the other panelists and I decided to go out for dinner. One of my co-panelists suggested a restaurant he had done some programming work for. When we got there, he showed us his brainchild. With much pomp and ceremony, our waiter delivered an iPad to the table. Our co-panelist took it and showed us how his company had set up the wine list as an app. Theoretically, you could scroll through descriptions and see what the suggested pairings were. I say theoretically, because none of that happened on this particular night.

Our moderator watched silently as the demonstration struggled through a series of glitches. Finally, he could stay silent no longer. “You know what else works, Dave? A sommelier. When I’m paying this much for a dinner, I want to talk to a f*$@ng human.”

Sometimes, there’s just not an app for that.

Sorry, I Don’t Speak Complexity

I was reading about an interesting study from Cornell this week. Dr. Morten Christiansen, co-director of Cornell’s Cognitive Science Program, and his colleagues explored an interesting linguistic paradox: languages that a lot of people speak – like English and Mandarin – have large vocabularies but relatively simple grammar, while languages that are smaller and more localized have fewer words but more complex grammatical rules.

The reason, Christiansen found, has to do with the ease of learning. It doesn’t take much to learn a new word: a couple of exposures and you’ve assimilated it. Because of this, new words become memes that tend to propagate quickly through the population. But the foundations of grammar are much more difficult to understand and learn. They take repeated exposures and an application of effort.

Language is a shared cultural component that depends on the structure of a network, so investigating the spread of language gives us an inside view of network dynamics. Take syntactic rules, for example – the rules that govern sentence structure, word order and punctuation. In terms of learnability, syntax offers much more complexity than the simple definition of a word. To learn syntax, you need repeated exposures to it. And this is where the structure and scope of a network come in. As Dr. Christiansen explains:

“If you have to have multiple exposures to, say, a complex syntactic rule, in smaller communities it’s easier for it to spread and be maintained in the population.”

This research seems to indicate that cultural complexity is first spawned in heavily interlinked and relatively intimate network nodes. For these memes – whether they be language, art, philosophies or ideologies – to bridge to and spread through the greater network, they are often simplified so they’re easier to assimilate.
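You can watch these mechanics in a toy model. The sketch below is my own illustration, not the study’s method: a meme spreads across a ring-shaped network, and a node only adopts it once a threshold number of its neighbors already know it – one exposure for a word-like meme, several for a grammar-like one.

```python
# Toy model (my own illustration, not the Cornell study's method):
# a node adopts a meme once `threshold` of its neighbors already know it.
def spread(n_nodes, k, threshold, seed_size=4, steps=200):
    # ring network: each node is linked to its k nearest neighbors per side
    nbrs = {i: [(i + d) % n_nodes for d in range(-k, k + 1) if d]
            for i in range(n_nodes)}
    knows = [i < seed_size for i in range(n_nodes)]  # one small seed community
    for _ in range(steps):
        knows = [knows[i] or sum(knows[j] for j in nbrs[i]) >= threshold
                 for i in range(n_nodes)]
    return sum(knows) / n_nodes  # fraction of the population that learned it

print(spread(400, k=2, threshold=1))  # word-like meme, sparse net: 1.0
print(spread(400, k=2, threshold=3))  # grammar-like meme, sparse net: 0.01
print(spread(50,  k=8, threshold=3))  # grammar-like meme, dense group: 1.0
```

In the toy model, the word-like meme saturates even the sparse network, while the grammar-like meme survives only where the community is dense enough to deliver repeated exposures. That is the paradox in miniature.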

If this is true, then we have to consider what might happen as our world becomes more interconnected. Will there be a collective “dumbing down” of culture? If current events are any indication, that certainly seems to be the case. The memes with the highest potential to spread are absurdly simple. No effort on the part of the receiver is required to understand them.

But there is a counterpoint to this that does hold out some hope. As Christiansen reminds us, “People can self-organize into smaller communities to counteract that drive toward simplification.” From this emerges an interesting yin and yang of cultural content creation. Highly connected nodes, independent of geography, produce some truly complex content. But because of the high threshold of assimilation required, that complexity becomes trapped in the node. The only things that escape are fragments of the content simplified to the point where they can go viral through the greater network. And to do so, they have to be stripped of their context.

This is exactly what caused the language paradox the team explored. If you have a wide network – a large population of speakers – there is a greater number of nodes producing new content. In this instance, the words are the fragments that get assimilated, and the grammar is the context that gets left behind.

There is another aspect of this to consider. Because of these dynamics unique to a large and highly connected network, the simple and trivial naturally rises to the top. Complexity gets trapped beneath the surface, imprisoned in isolated nodes within the network. But this doesn’t mean complexity goes away – it just fragments and becomes more specific to the node in which it originated. The network loses a common understanding and definition of that complexity. We lose our shared ideological touchstones, which are by necessity more complex.

If we speculate on where this might go in the future, it’s not unreasonable to expect to see an increase in tribalism in matters related to any type of complexity – like religion or politics – and a continuing expansion of simple cultural memes.

The only time we may truly come together as a society is to share a video of a cat playing basketball.


Thinking Beyond the Brand

Apparently boring is the new gold standard of branding, at least when it comes to ranking countries on the international stage. According to a new report from US News, the Wharton School and Y&R’s BAV Group, Canada is the No. 2 country in the world. That’s right – Canada – the country that Robin Williams called “a really nice apartment over a meth lab.”

The methodology here is interesting. It was basically a brand benchmarking study. That’s what BAV does: it’s the “world’s largest and leading empirical study of brands.” And Canada’s brand is: safe, slightly left leaning, polite, predictable and – yes – boring. Oh – and we have lakes and mountains.

Who, you may ask, beat us? Switzerland – a country that is safe, slightly left leaning, polite, predictable and – yes – boring. Oh – and they have lakes and mountains, too.

This study has managed to reduce entire countries to the type of cognitive shorthand we call a brand. As a Canadian, I can tell you this country contains multitudes – some good, some bad – and remarkably little of it is boring. We’re like an iceberg (literally, in some months): there’s a lot that lies under the surface. But as far as the world cares, you already know everything you need to know about Canada, and no further learning is required.

That’s the problem with branding. We rely more and more on whatever brand perceptions we already have in place, without thinking too much about whether they’re based on valid knowledge. We certainly don’t go out of our way to challenge those perceptions. What was originally intended to sell dish soap is being used as a cognitive shortcut for everything we do. We rely on branding – instant know-ability, or what I called “labelability” in a previous column. We spend more and more of our time knowing and less and less of it learning.

Branding is a mental rot that is reducing everything to a broadly sketched caricature.

Take politics, for example. That same BAV Group turned its branding spotlight on candidates for the next presidential election. Y&R CEO David Sable explored just how important branding will be in 2020. Spoiler alert: it will be huge.

When BAV looked at the brands of various candidates, Trump continues to dominate. This was true in 2016, and depending on the variables of fate currently in play, it could be true in 2020 as well. “We showed how fresh and powerful President Trump was as a brand, and just how tired and weak Hillary was… despite having more esteem and stature.”

Sable prefaced his exploration with this warning: “What follows is not a political screed, endorsement or advocacy of any sort. It is more a questioning of ourselves, with some data thrown to add to the interrogative.” In other words, he’s saying that this is not really based on any type of rational foundation; it’s simply evaluating what people believe. And I find that particular mental decoupling to be troubling.

This kind of cognitive shorthand is increasingly prevalent in an attention-deficit world. Everything is being reduced to a brand. The problem is that once a brand has been “branded,” it’s very difficult to shake. Our world is being boiled down to branding and target marketing. Our brains have effectively become pigeonholed. That’s why Trump was right when he said, “I could stand in the middle of Fifth Avenue and shoot somebody and I wouldn’t lose any voters.”

We have a dangerous spiral developing. In a world with an escalating amount of information, we increasingly rely on brands/beliefs for our rationalization of the world. When we do expose ourselves to information, we rely on information that reinforces those brands and beliefs. Barack Obama identified this in a recent interview with David Letterman: “One of the biggest challenges we have to our democracy is the degree to which we don’t share a common baseline of facts. We are operating in completely different information universes. If you watch Fox News, you are living on a different planet than you are if you listen to NPR.”

Our information sources have to be “on-brand”. And those sources are filtered by algorithms shaped by our current beliefs. As our bubble solidifies, there is nary a crack left for a fresh perspective to sneak in.


The Decentralization of Trust

Forget Bitcoin. It’s a symptom. Forget even Blockchain. It’s big – but it’s technology. That makes it a tool. Which means it’s used at our will. And that will is the real story. Our will is always the real story – why do we build the tools we do? What is revolutionary is that we’ve finally found a way to decentralize trust. That runs against the very nature of how we’ve defined trust for centuries.

And that’s the big deal.

Trust began by being very intimate – ruled by our instincts in a face-to-face context. But for the last thousand years, our history has been all about concentration and the mass of everything – including whom we trust. We have consolidated our defense, our government, our commerce and our culture. In doing so, we have also consolidated our trust in a few all-powerful institutions.

But the past 20 years have been all about decentralization and the tearing down of power structures, as we invent new technologies that let us do just that. In that vein, Blockchain is a doozy. It will change everything. But it’s only a big deal because we’re exerting our will to make it a big deal. And the “why” behind that is what I’m focusing on.

Right or wrong, we have now decided we’d rather trust distribution than centralization. There is much evidence to support that view. Concentration of power also means concentration of risk. The opportunity for corruption skyrockets. Big things tend to rot from the inside out. This is not a new discovery on our part. We’ve known for at least a few centuries that “absolute power corrupts absolutely.”

As the world consolidated, it also became more corrupt. But it was always a trade-off we felt we had to make. Again, the collective will of the people is the story thread to follow here. Consolidation brought many benefits. We wouldn’t be where we are today if it weren’t for hierarchies, in one form or another. So we willingly subjugated ourselves to someone – somewhere – hoping to maintain a delicate balance where the risk of corruption was outweighed by personal gain. I remember asking the Atlantic’s noted correspondent, James Fallows, a question when I met him once in China: how does the average Chinese citizen tolerate the paradoxical mix of rampant economic entrepreneurialism and crushing ideological totalitarianism? His answer: “As long as their lives are better today than they were yesterday, and promise to be even better tomorrow, they’ll tolerate it.”

That pretty much summarizes our attitudes towards control. We tolerated it because if we wanted our lives to continue to improve, we really didn’t have a choice. But perhaps we do now. And that possibility has pushed our collective will away from consolidated power hubs and towards decentralized networks. Blockchain gives us another way to do that. It promises a way to work around Big Money, Big Banks, Big Government and Big Business. We are eager to do so. Why? Because up to now we have had to place our trust in these centralized institutions and that trust has been consistently abused. But perhaps Blockchain technology has found a way to distribute trust in a foolproof way. It appears to offer a way to make everything better without the historic tradeoff of subjugating ourselves to anyone.
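The mechanism underneath that promise deserves a quick look. Here is a minimal sketch – an illustration of the tamper-evidence idea only, not any real blockchain’s implementation – of why a chain of hashes lets strangers trust a shared record without a central referee:

```python
import hashlib, json

# Each block commits to the hash of the previous block, so altering any
# old record breaks every link after it -- no central referee required.
def block_hash(block):
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

GENESIS = "0" * 64
chain, prev = [], GENESIS
for record in ["Alice pays Bob 5", "Bob pays Carol 2"]:
    block = {"prev_hash": prev, "record": record}
    chain.append(block)
    prev = block_hash(block)

def verify(chain):
    prev = GENESIS
    for block in chain:
        if block["prev_hash"] != prev:
            return False
        prev = block_hash(block)
    return True

print(verify(chain))                       # True
chain[0]["record"] = "Alice pays Bob 500"  # try to rewrite history
print(verify(chain))                       # False: the chain exposes it
```

Change one old record and every hash after it stops matching. Stripped of the hype, that is what “decentralized trust” means: the record polices itself, so no single institution has to be believed.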

However, when we move our trust to a network, we also make that trust subject to unanticipated network effects. That may be the new trade-off we have to make. Increasingly, our technology depends on networks, which – by their nature – are complex adaptive systems. That’s why I keep preaching the same message: we have to understand complexity. We must accept that complexity has interaction effects we could never successfully predict.

It’s an interesting swap to consider – control for complexity. Control has always offered us the faint comfort of an illusion of predictability. We hoped that someone who knew more than we did was manning the controls. This is new territory for us. Will it be better? Who can say? But we seem to be building an irreversible head of steam in that direction.

Fat Heads and Long Tails: Living in a Viral World

I, and the rest of the world, bought “Fire and Fury: Inside the Trump White House” last Friday. Forbes reports that in one weekend it climbed to the top of Amazon’s book list, and demand for the book is “unprecedented.”

We use that word a lot now. Our world seems to be a launching pad for “unprecedented” events. Nassim Nicholas Taleb’s black swans used to be the exception — that was the definition of the term. Now they’re becoming the norm. You can’t walk down the street without accidentally kicking one.

Our world is a hyper-connected feedback loop that constantly engenders the “unprecedented”: storms, blockbusters, presidents. In this world, historical balance has disappeared and all bets are off.

One of the many things that has changed is the distribution pattern of culture. In 2006, Chris Anderson wrote the book “The Long Tail,” explaining how online merchandising, digital distribution and improved fulfillment logistics created an explosion of choices. Suddenly, the distribution curve of pretty much everything — music, books, apps, video, varieties of cheese — grew longer and longer, creating Anderson’s “Long Tail.”

But let’s flip the curve and look at the other end. The curve has not just grown longer. The leading edge of it has also grown on the other axis. Heads are now fatter.

“Fire and Fury” has sold more copies in a shorter period of time than would have ever been possible at any other time in history. That’s partly because of the same factors that created the Long Tail: digital fulfillment and more efficient distribution. But the biggest factor is that our culture is now a digitally connected echo chamber that creates the perfect conditions for virality. Feeding frenzies are now an essential element of our content marketing strategies.

If ever there was a book written to go viral, it’s “Fire and Fury.” Every page should have a share button. Not surprisingly, given its subject matter, the book has all the subtlety and nuance of a brick to the head. This is a book built to be a blockbuster.

And that’s the thing about the new normal of virality: Blockbusters become the expectation out of the starting gate.

As I said last week, content producers have every intention of addicting their audience, shooting for binge consumption of each new offering. Wolff wrote this book to be consumed in one sitting.

As futurist (or “futuristorian”) Brad Berens writes, the book is “fascinating in an I-can’t-look-away-at-the-17-car-pileup-with-lots-of-ambulances way.” But there’s usually a price to be paid for going down the sensational path. “Fire and Fury” has all the staying power of a “bag of Cheetos.” Again, Berens hits the nail on the head: “You can measure the relevance of Wolff’s book in half-lives, with each half-life being about a day.”

One of the uncanny things about Donald Trump is that he always out-sensationalizes any attempt to sensationalize him. He is the ultimate “viral” leader, intentionally — or not — the master of the “Fat Head.” Today that head is dedicated to Wolff’s book. Tomorrow, Trump will do something to knock it out of the spotlight.

Social media analytics developer Tom Maiaroto found the average sharing lifespan of viral content is about a day. So while the Fat Head may indeed be fat, it’s also extremely short-lived. This means that, increasingly, content intended to go viral — whether it be books, TV shows or movies — is intentionally developed to hit this short but critical window.
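Berens’ “half-life” framing is easy to make concrete. A quick sketch, taking his one-day figure at face value:

```python
# If relevance halves every day (Berens' figure), how much of the
# launch-day attention is left after t days?
def remaining(pct_at_launch, t_days, half_life_days=1.0):
    return pct_at_launch * 0.5 ** (t_days / half_life_days)

for day in range(8):
    print(day, f"{remaining(100.0, day):6.2f}%")
# day 0: 100.00%  ...  day 3: 12.50%  ...  day 7: 0.78%
```

Under a one-day half-life, less than 1% of launch-day attention survives the week – which is why content built for the Fat Head has to land its entire audience almost immediately.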

So what is the psychology behind virality? What buttons have to be pushed to start the viral cascade?

Wharton Marketing Professor Jonah Berger, who researched what makes things go viral, identified six principles: Social Currency, Memory Triggers, Emotion, Social Proof, Practical Value and Stories. “Fire and Fury” checks almost all these boxes, with the possible exception of practical value.

But it most strongly resonates with social currency, social proof and emotion. For everyone who thinks Trump is a disaster of unprecedented proportions, this book acts as kind of an ideological statement, a social positioner, an emotional rant and confirmation bias all rolled into one. It is a tribal badge in print form.

When we look at the diffusion of content through the market, technology has again acted as a polarizing factor. New releases are pushed toward the outlier extremes, either far down the Long Tail or squarely aimed at cashing in on the Fat Head. And if it’s the latter of these, then going viral becomes critical.

Expect more fire. Expect more fury.

Watching TV Through The Overton Window

Tell me, does anyone else have a problem with this recent statement by HBO CEO Richard Plepler: “I am trying to build addicts — and I want people addicted to something every week”?

I read this in a MediaPost column about a month ago. At the time, I filed it away as something vaguely troubling. I just checked and found no one else had commented on it. Nothing. We all collectively yawned as we checked out the next series to binge watch. That’s just what we do now.

When did enabling addiction become a goal worth shooting for? What made the head of a major entertainment corporation think it was OK to use a term that is defined as “persistent, compulsive use of a substance known to the user to be harmful” to describe a strategic aspiration? And, most troubling of all, when did we all collectively decide that that was OK?

Am I overreacting? Is bulk consuming an entire season’s worth of “Game of Thrones” or “Big Little Lies” over a 48-hour period harmless?

Speaking personally, when I emerge from my big-screen basement cave after watching more than two episodes of anything in a row, I feel like crap. And there’s growing evidence that I’m not alone. I truly believe this is not a healthy direction for us.

But my point here is not to debate the pros and cons of binge watching. My point is that Plepler’s statement didn’t cause any type of adverse reaction. We just accepted it. And that may be because of something called the Overton Window.

The Overton Window was named after Joseph Overton, who developed the concept at a libertarian think tank — the Mackinac Center for Public Policy — in the mid-1990s.

Typically, the term is used to talk about the range of policies acceptable to the public in the world of politics. In the middle of the window lies current policy. Moving out from the center in both directions (right and left) are the degrees of diminishing acceptability. In order, these are: Popular, Sensible, Acceptable, Radical and Unthinkable.

The window can move, with ideas that were once unthinkable eventually becoming acceptable or even popular due to the shifting threshold of public acceptance. The concept, which has roots going back over 150 years, has again bubbled to the top of our consciousness thanks to Trumpian politics, which make “extreme things look normal,” according to a post on Vox.

Political strategists have embraced and leveraged the concept to try to bring their own agendas within the ever-moving window. Because here’s the interesting thing about the Overton Window: If you want to move it substantially, the fastest way to do it is to float something outrageous to the public and ask them to consider it. Once you’ve set a frame of consideration towards the outliers, it tends to move the window substantially in that direction, bringing everything less extreme suddenly within the bounds of the window.

This has turned the Overton Window into a strategic political tug of war, with the right and left battling to shift the window by increasingly moving to the extremes.

What’s most intriguing about the Overton Window is how it reinforces the idea that much of our social sensibility is relative rather than absolute. Our worldview is shaped not only by what we believe, but what we believe others will find acceptable. Our perspective is constantly being framed relative to societal norms.

Perhaps — just perhaps — the CEO of HBO can now use the word “addict” when talking about entertainment because our perspective has been shifted toward an outlying idea that compulsive consumption is OK, or even desirable.

But I have to call bullshit on that. I don’t believe it’s OK. It’s not something we as an industry — whether that industry is marketing or entertainment — should be endorsing. It’s not ennobling us; it’s enabling us.

There’s a reason why the word “addict” has a negative connotation. If our “window” of acceptability has shifted to the point where we just blithely accept these types of statements and move on, perhaps it’s time to shift the window in the opposite direction.

Why Reality is in Deep Trouble

If 2017 was the year of Fake News, 2018 could well be the year of Fake Reality.

You Can’t Believe Your Eyes

I just saw Star Wars: The Last Jedi. When Carrie Fisher came on screen, I had to ask myself: is this really her, or is it CGI? I couldn’t remember if she had the chance to do all her scenes before her tragic passing last year. When I had a chance to check, I found that it was actually her. But the very fact that I had to ask the question is telling. After all, Star Wars: Rogue One did resurrect Peter Cushing via CGI, more than two decades after his death.

CGI is not quite to the point where you can’t tell the difference between reality and computer generation, but it’s only a hair’s breadth away. It’s definitely to the point where you can no longer trust your eyes. And that has some interesting implications.

You Can Now Put Words in Anyone’s Mouth

The Rogue One visual effects head, John Knoll, had to fend off some pointed questions about the ethics of bringing a dead actor back to life. He defended the move by saying, “We didn’t do anything Peter Cushing would have objected to.” Whether you agree or not, the bigger question here is that they could have. They could have made the Cushing digital doppelganger do anything – and say anything – they wanted.

But It’s Not Just Hollywood That Can Warp Reality

If fake reality comes out of Hollywood, we are prepared to cut it some slack. There is a long and slippery ethical slope that defines the entertainment landscape. In Rogue One’s case, the issue wasn’t using CGI, or even using CGI to represent a human – that describes a huge slice of today’s entertainment. It was using CGI to resurrect a dead actor and literally put words in his mouth. That seemed to cross some ethical line in our perception of what’s real. But at the end of the day, this questionable warping of reality was still embedded in a fictional context.

But what if we could put words in the manufactured mouth of a sitting US president? That’s exactly what a team at the University of Washington did with Barack Obama, using Stanford’s Face2Face technology. They used a neural network to essentially create a lip-sync video of Obama, with the computer manipulating images of his face to sync his lips to a sample of audio from another speech.

Being academics, they kept everything squeaky clean on the ethical front. All the words were Obama’s – it’s just that they were said at two different times. But someone less scrupulous could easily synthesize Obama’s voice – or anyone’s – and sync it to video of them talking that would be indistinguishable from reality.

Why We Usually Believe Our Eyes

When it comes to a transmitted representation of reality, we accept video as the gold standard. Our brains believe what we see to be real. Of our five senses, we trust sight the most to interpret what is real and what is fake. Photos used to be accepted as incontrovertible proof of reality, until Photoshop messed that up. Now it’s video’s turn. Technology has handed us the tools to manufacture any reality we wish and distribute it in the form of video. And because it’s in that form, most everyone will believe it to be true.

Reality, Inc.

The concept of a universally understood and verifiable reality is important. It creates some type of provable common ground. We have always had our own ways of interpreting reality, but at the end of the day, there was typically someone – and some way – to empirically determine what was real, if we just bothered to look for it.

But we now run the risk of accepting manufactured reality as “good enough” for our purposes. In the past few years, we’ve discovered just how dangerous filtered reality can be. Whether we like it or not, Facebook, Google, YouTube and other mega-platforms are now responsible for how most of us interpret our world. These are for-profit organizations that really have no ethical obligation to attempt to provide a reasonable facsimile of reality. They have already outstripped the restraints of legislation and any type of ethical oversight. Now, these same platforms can be used to distribute media that are specifically designed to falsify reality. Of course, I should also mention that in return for access to all this, we give up a startling amount of information about ourselves. And that, according to UBC professor Taylor Owen, is deeply troubling:

“It means thinking very differently about the bargain that platforms are offering us. For a decade the deal has been that users get free services, and platforms get virtually unlimited collection of data about all aspects of our life and the ability to shape the information we consume. The answer isn’t to disengage, as these tools are embedded in our society, but instead to think critically about this bargain.

“For example, is it worth having Facebook on your mobile phone in exchange for the immense tracking data about your digital and offline behaviour? Or is the free children’s content available on YouTube worth the data profile that is being built about your toddler, the horrific content that gets algorithmically placed into your child’s feed, and the ways in which A.I. are creating content for them and shaping what they view? Is the Amazon smart speaker in your living room worth providing Amazon access to everything you say in your home? For me, the answer is a resounding ‘no’.”

2018 could be an interesting year…