Thinking Beyond the Brand

Apparently boring is the new gold standard of branding, at least when it comes to ranking countries on the international stage. According to a new report from US News, the Wharton School and Y&R’s BAV Group, Canada is the No. 2 country in the world. That’s right – Canada – the country that Robin Williams called “a really nice apartment over a meth lab.”

The methodology here is interesting. It was basically a brand benchmarking study. That’s what BAV does. They’re the “world’s largest and leading empirical study of brands.” And Canada’s brand is: safe, slightly left-leaning, polite, predictable and – yes – boring. Oh – and we have lakes and mountains.

Who, you may ask, beat us? Switzerland – a country that is safe, slightly left-leaning, polite, predictable and – yes – boring. Oh – and they have lakes and mountains too.

This study has managed to reduce entire countries to a type of cognitive shorthand we call a brand. As a Canadian, I can tell you this country contains multitudes – some good, some bad – and remarkably little of it is boring. We’re like an iceberg (literally, in some months) – there’s a lot that lies under the surface. But as far as the world is concerned, you already know everything you need to know about Canada, and no further learning is required.

That’s the problem with branding. We rely more and more on whatever brand perceptions we already have in place without thinking too much about whether they’re based on valid knowledge. We certainly don’t go out of our way to challenge those perceptions. What was originally intended to sell dish soap is being used as a cognitive shortcut for everything we do. We rely on branding – instant know-ability – or what I called labelability in a previous column. We spend more and more of our time knowing and less and less of it learning.

Branding is a mental rot that is reducing everything to a broadly sketched caricature.

Take politics for example. That same BAV group turned their branding spotlight on candidates for the next presidential election. Y&R CEO David Sable explored just how important branding will be in 2020. Spoiler alert: it will be huge.

When BAV looked at the brands of various candidates, Trump’s continued to dominate. This was true in 2016 and, depending on the variables of fate currently in play, it could be true in 2020 as well. “We showed how fresh and powerful President Trump was as a brand, and just how tired and weak Hillary was… despite having more esteem and stature.”

Sable prefaced his exploration with this warning: “What follows is not a political screed, endorsement or advocacy of any sort. It is more a questioning of ourselves, with some data thrown to add to the interrogative.” In other words, he’s saying that this is not really based on any type of rational foundation; it’s simply evaluating what people believe. And I find that particular mental decoupling to be troubling.

This idea of cognitive shorthand is increasingly prevalent in an attention-deficit world. Everything is being reduced to a brand. The problem with this is that once that brand has been “branded,” it’s very difficult to shake. Our world is being boiled down to branding and target marketing. Our brains have effectively become pigeonholed. That’s why Trump was right when he said, “I could stand in the middle of Fifth Avenue and shoot somebody and I wouldn’t lose any voters.”

We have a dangerous spiral developing. In a world with an escalating amount of information, we increasingly rely on brands/beliefs for our rationalization of the world. When we do expose ourselves to information, we rely on information that reinforces those brands and beliefs. Barack Obama identified this in a recent interview with David Letterman: “One of the biggest challenges we have to our democracy is the degree to which we don’t share a common baseline of facts. We are operating in completely different information universes. If you watch Fox News, you are living on a different planet than you are if you listen to NPR.”

Our information sources have to be “on-brand.” And those sources are filtered by algorithms shaped by our current beliefs. As our bubble solidifies, there is nary a crack left for a fresh perspective to sneak in.
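To make that filtering loop concrete, here is a minimal sketch – my own illustration, with hypothetical data, not any platform’s actual algorithm – of how an engagement-driven recommender keeps serving us whatever is already “on-brand”: it scores content by overlap with what we have clicked before, so the off-brand perspective rarely surfaces.

```python
# Minimal sketch of a belief-reinforcing content filter.
# Hypothetical data and scoring; real platforms are far more complex.
from collections import Counter

ARTICLES = [
    {"title": "Why the carbon tax works", "tags": {"climate", "left"}},
    {"title": "Why the carbon tax fails", "tags": {"climate", "right"}},
    {"title": "Hockey trade rumours", "tags": {"sports"}},
]

def recommend(click_history, articles, top_n=1):
    """Rank articles by overlap with the tags a user has already clicked."""
    seen_tags = Counter(tag for article in click_history for tag in article["tags"])
    return sorted(articles,
                  key=lambda a: sum(seen_tags[t] for t in a["tags"]),
                  reverse=True)[:top_n]

# A reader who has only ever clicked "left"-tagged climate stories
history = [{"title": "Emissions report", "tags": {"climate", "left"}}]
print(recommend(history, ARTICLES))
# Only "Why the carbon tax works" comes back - the bubble feeds itself.
```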

 

The Decentralization of Trust

Forget Bitcoin. It’s a symptom. Forget even Blockchain. It’s big – but it’s technology. That makes it a tool. Which means it’s used at our will. And that will is the real story. Our will is always the real story – why do we build the tools we do? What is revolutionary is that we’ve finally found a way to decentralize trust. That runs against the very nature of how we’ve defined trust for centuries.

And that’s the big deal.

Trust began by being very intimate – ruled by our instincts in a face-to-face context. But for the last thousand years, our history has been all about concentration and the mass of everything – including whom we trust. We have consolidated our defense, our government, our commerce and our culture. In doing so, we have also consolidated our trust in a few all-powerful institutions.

But the past 20 years have been all about decentralization and tearing down power structures, as we invent new technologies to let us do that. In that vein, Blockchain is a doozy. It will change everything. But it’s only a big deal because we’re exerting our will to make it a big deal. And the “why” behind that is what I’m focusing on.

Right or wrong, we have now decided we’d rather trust distribution than centralization. There is much evidence to support that view. Concentration of power also means concentration of risk. The opportunity for corruption skyrockets. Big things tend to rot from the inside out. This is not a new discovery on our part. We’ve known for at least a few centuries that “absolute power corrupts absolutely.”

As the world consolidated, it also became more corrupt. But it was always a trade-off we felt we had to make. Again, the collective will of the people is the story thread to follow here. Consolidation brought many benefits. We wouldn’t be where we are today if it wasn’t for hierarchies, in one form or another. So we willingly subjugated ourselves to someone – somewhere – hoping to maintain a delicate balance where the risk of corruption was outweighed by a personal gain. I remember asking The Atlantic’s noted correspondent, James Fallows, a question when I met him once in China. I asked how the average Chinese citizen could tolerate the paradoxical mix of rampant economic entrepreneurialism and crushing ideological totalitarianism. His answer was, “As long as their lives are better today than they were yesterday, and promise to be even better tomorrow, they’ll tolerate it.”

That pretty much summarizes our attitudes towards control. We tolerated it because if we wanted our lives to continue to improve, we really didn’t have a choice. But perhaps we do now. And that possibility has pushed our collective will away from consolidated power hubs and towards decentralized networks. Blockchain gives us another way to do that. It promises a way to work around Big Money, Big Banks, Big Government and Big Business. We are eager to do so. Why? Because up to now we have had to place our trust in these centralized institutions, and that trust has been consistently abused. But perhaps Blockchain technology has found a foolproof way to distribute trust. It appears to offer a way to make everything better without the historic trade-off of subjugating ourselves to anyone.
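For readers who want a feel for the mechanics, here is a minimal sketch of the core idea – a chain of records where each block carries a hash of the one before it, so anyone holding a copy can verify that history hasn’t been quietly rewritten. This illustrates the principle only; it is not Bitcoin’s actual implementation, which layers on consensus, proof-of-work and much more.

```python
# Minimal hash-chain sketch: tampering is detectable by anyone holding
# a copy of the chain, so no central authority has to be trusted.
import hashlib
import json

def block_hash(block):
    """Hash a block's contents, including its link to the previous block."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, data):
    prev_hash = block_hash(chain[-1]) if chain else "genesis"
    chain.append({"data": data, "prev_hash": prev_hash})

def verify(chain):
    """True only if every block still points at an unaltered predecessor."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
print(verify(chain))                     # True

chain[0]["data"] = "Alice pays Bob 500"  # someone rewrites history...
print(verify(chain))                     # False - every copy exposes the edit
```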

However, when we move our trust to a network, we also make that trust subject to unanticipated network effects. That may be the new trade-off we have to make. Increasingly, our technology is dependent on networks, which – by their nature – are complex adaptive systems. That’s why I keep preaching the same message – we have to understand complexity. We must accept that complexity has interaction effects we could never successfully predict.

It’s an interesting swap to consider – control for complexity. Control has always offered us the faint comfort of an illusion of predictability. We hoped that someone who knew more than we did was manning the controls. This is new territory for us. Will it be better? Who can say? But we seem to be building an irreversible head of steam in that direction.

Fat Heads and Long Tails: Living in a Viral World

I, and the rest of the world, bought “Fire and Fury: Inside the Trump White House” last Friday. Forbes reports that in one weekend, it has climbed to the top of the Amazon booklist, and demand for the book is “unprecedented.”

We use that word a lot now. Our world seems to be a launching pad for “unprecedented” events. Nassim Nicholas Taleb’s black swans used to be the exception — that was the definition of  the term. Now they’re becoming the norm. You can’t walk down the street without accidentally kicking one.

Our world is a hyper-connected feedback loop that constantly engenders the “unprecedented”: storms, blockbusters, presidents. In this world, historical balance has disappeared and all bets are off.

One of the many things that has changed is the distribution pattern of culture. In 2006, Chris Anderson wrote the book “The Long Tail,” explaining how online merchandising, digital distribution and improved fulfillment logistics created an explosion of choices. Suddenly, the distribution curve of pretty much everything  — music, books, apps, video, varieties of cheese — grew longer and longer, creating Anderson’s “Long Tail.”

But let’s flip the curve and look at the other end. The curve has not just grown longer. The leading edge of it has also grown on the other axis. Heads are now fatter.

“Fire and Fury” has sold more copies in a shorter period of time than would have ever been possible at any other time in history. That’s partly because of the  same factors that created the Long Tail: digital fulfillment and more efficient distribution. But the biggest factor is that our culture is now a digitally connected echo chamber that creates the perfect conditions for virality. Feeding frenzies are now an essential element of our content marketing strategies.

If ever there was a book written to go viral, it’s “Fire and Fury.” Every page should have a share button. Not surprisingly, given its subject matter,  the book has all the subtlety and nuance of a brick to the head. This is a book built to be a blockbuster.

And that’s the thing about the new normal of virality: Blockbusters become the expectation out of the starting gate.

As I said last week, content producers have every intention of addicting their audience, shooting for binge consumption of each new offering. Wolff wrote this book  to be consumed in one sitting.

As futurist (or “futuristorian”) Brad Berens writes, the book is “fascinating in an I-can’t-look-away-at-the-17-car-pileup-with-lots-of-ambulances way.” But there’s usually a price to be paid for going down the sensational path. “Fire and Fury” has all the staying power of a “bag of Cheetos.” Again, Berens hits the nail on the head: “You can measure the relevance of Wolff’s book in half-lives, with each half-life being about a day.”

One of the uncanny things about Donald Trump is that he always out-sensationalizes any attempt to sensationalize him. He is the ultimate “viral” leader, intentionally — or not — the master of the “Fat Head.” Today that head is dedicated to Wolff’s book. Tomorrow, Trump will do something to knock it out of the spotlight.

Social media analytics developer Tom Maiaroto found the average sharing lifespan of viral content is about a day. So while the Fat Head may indeed be Fat, it’s also extremely short-lived. This means that, increasingly, content intended to go viral  — whether it be books, TV shows or movies — is intentionally developed to hit this short but critical window.
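Put Berens’ half-life framing and Maiaroto’s one-day figure together and the arithmetic of the Fat Head is sobering. Here is a quick back-of-the-envelope calculation, assuming (purely for illustration) that attention halves every day:

```python
# Back-of-the-envelope decay of viral attention, assuming an
# illustrative one-day half-life.
def attention_remaining(days, half_life_days=1.0):
    """Fraction of launch-day attention left after a given number of days."""
    return 0.5 ** (days / half_life_days)

for day in (0, 1, 3, 7):
    print(f"day {day}: {attention_remaining(day):.1%} of launch-day attention")
# day 0: 100.0%, day 1: 50.0%, day 3: 12.5%, day 7: 0.8%
```

A week after launch, less than one percent of the original attention is left – which is exactly why this kind of content is engineered to detonate on day one.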

So what is the psychology behind virality? What buttons have to be pushed to start the viral cascade?

Wharton Marketing Professor Jonah Berger, who researched what makes things go viral, identified six principles: Social Currency, Memory Triggers, Emotion, Social Proof, Practical Value and Stories. “Fire and Fury” checks almost all these boxes, with the possible exception of practical value.

But it most strongly resonates with social currency, social proof and emotion. For everyone who thinks Trump is a disaster of unprecedented proportions, this book acts as a kind of ideological statement, a social positioner, an emotional rant and confirmation bias all rolled into one. It is a tribal badge in print form.

When we look at the diffusion of content through the market, technology has again acted as a polarizing factor. New releases are pushed toward the outlier extremes, either far down the Long Tail or squarely aimed at cashing in on the Fat Head. And if it’s the latter of these, then going viral becomes critical.

Expect more fire. Expect more fury.

Watching TV Through The Overton Window

Tell me, does anyone else have a problem with this recent statement by HBO CEO Richard Plepler: “I am trying to build addicts — and I want people addicted to something every week”?

I read this in a MediaPost column about a month ago. At the time, I filed it away as something vaguely troubling. I just checked and found no one else had commented on it. Nothing. We all collectively yawned as we checked out the next series to binge watch. That’s just what we do now.

When did enabling addiction become a goal worth shooting for? What made the head of a major entertainment corporation think it was OK to use a term that is defined as “persistent, compulsive use of a substance known to the user to be harmful” to describe a strategic aspiration? And, most troubling of all, when did we all collectively decide that that was OK?

Am I overreacting? Is bulk consuming an entire season’s worth of “Game of Thrones” or “Big Little Lies” over a 48-hour period harmless?

Speaking personally, when I emerge from my big-screen basement cave after watching more than two episodes of anything in a row, I feel like crap. And there’s growing evidence that I’m not alone. I truly believe this is not a healthy direction for us.

But my point here is not to debate the pros and cons of binge watching. My point is that Plepler’s statement didn’t cause any type of adverse reaction. We just accepted it. And that may be because of something called the Overton Window.

The Overton Window was named after Joseph Overton, who developed the concept at a libertarian think tank  — the Mackinac Center for Public Policy — in the mid-1990s.

Typically, the term is used to talk about the range of policies acceptable to the public in the world of politics. In the middle of the window lies current policy. Moving out from the center in both directions (right and left) are the degrees of diminishing acceptability. In order, these are: Popular, Sensible, Acceptable, Radical and Unthinkable.

The window can move, with ideas that were once unthinkable eventually becoming acceptable or even popular due to the shifting threshold of public acceptance. The concept, which has roots going back over 150 years, has again bubbled to the top of our consciousness thanks to Trumpian politics, which make “extreme things look normal,” according to a post on Vox.

Political strategists have embraced and leveraged the concept to try to bring their own agendas within the ever-moving window. Because here’s the interesting thing about the Overton Window: if you want to move it substantially, the fastest way to do it is to float something outrageous to the public and ask them to consider it. Once you’ve set a frame of consideration towards the outliers, the window tends to shift in that direction, bringing everything less extreme suddenly within its bounds.

This has turned the Overton Window into a strategic political tug-of-war, with the right and left battling to shift the window by increasingly moving to the extremes.

What’s most intriguing about the Overton Window is how it reinforces the idea that much of our social sensibility is relative rather than absolute. Our worldview is shaped not only by what we believe, but by what we believe others will find acceptable. Our perspective is constantly being framed relative to societal norms.

Perhaps — just perhaps — the CEO of HBO can now use the word “addict” when talking about entertainment because our perspective has been shifted toward an outlying idea that compulsive consumption is OK, or even desirable.

But I have to call bullshit on that. I don’t believe it’s OK. It’s not something we as an industry — whether that industry is marketing or entertainment — should be endorsing. It’s not ennobling us; it’s enabling us.

There’s a reason why the word “addict” has a negative connotation. If our “window” of acceptability has shifted to the point where we just blithely accept these types of statements and move on, perhaps it’s time to shift the window in the opposite direction.

Why Reality is in Deep Trouble

If 2017 was the year of Fake News, 2018 could well be the year of Fake Reality.

You Can’t Believe Your Eyes

I just saw Star Wars: The Last Jedi. When Carrie Fisher came on screen, I had to ask myself: Is this really her, or is that CGI? I couldn’t remember if she had the chance to do all her scenes before her tragic passing last year. When I had a chance to check, I found that it was actually her. But the very fact that I had to ask the question is telling. After all, Star Wars Rogue One did resurrect Peter Cushing via CGI, and he passed away more than two decades ago.

CGI is not quite to the point where you can’t tell the difference between reality and computer generation, but it’s only a hair’s breadth away. It’s definitely to the point where you can no longer trust your eyes. And that has some interesting implications.

You Can Now Put Words in Anyone’s Mouth

The Rogue One Visual Effects head, John Knoll, had to fend off some pointed questions about the ethics of bringing a dead actor back to life. He defended the move by saying, “We didn’t do anything Peter Cushing would have objected to.” Whether you agree or not, the bigger question here is that they could have. They could have made the Cushing digital doppelganger do anything – and say anything – they wanted.

But It’s Not Just Hollywood That Can Warp Reality

If fake reality comes out of Hollywood, we are prepared to cut it some slack. There is a long and slippery ethical slope that defines the entertainment landscape. In Rogue One’s case, the issue wasn’t using CGI, or even using CGI to represent a human – that describes a huge slice of today’s entertainment. It was using CGI to resurrect a dead actor and literally put words in his mouth. That seemed to cross some ethical line in our perception of what’s real. But at the end of the day, this questionable warping of reality was still embedded in a fictional context.

But what if we could put words in the manufactured mouth of a sitting US president? That’s exactly what a team at the University of Washington did with Barack Obama, using Stanford’s Face2Face technology. They used a neural network to essentially create a lip-sync video of Obama, with the computer manipulating images of his face to lip-sync it to a sample of audio from another speech.

Being academics, they kept everything squeaky clean on the ethical front. All the words were Obama’s – it’s just that they were said at two different times. But those less scrupulous could easily synthesize Obama’s voice – or anyone’s – and sync it to video of them talking that would be indistinguishable from reality.

Why We Usually Believe Our Eyes

When it comes to a transmitted representation of reality, we accept video as the gold standard. Our brains believe what we see to be real. Of all our five senses, we trust sight the most to interpret what is real and what is fake. Photos used to be accepted as incontrovertible proof of reality, until Photoshop messed that up. Now, it’s video’s turn. Technology has handed us the tools that enable us to manufacture any reality we wish and distribute it in the form of video. And because it’s in that form, most everyone will believe it to be true.

Reality, Inc.

The concept of a universally understood and verifiable reality is important. It creates some type of provable common ground. We have always had our own ways of interpreting reality, but at the end of the day, there was typically someone – and some way – to empirically determine what was real, if we just bothered to look for it.

But we now run the risk of accepting manufactured reality as “good enough” for our purposes. In the past few years, we’ve discovered just how dangerous filtered reality can be. Whether we like it or not, Facebook, Google, YouTube and other mega-platforms are now responsible for how most of us interpret our world. These are for-profit organizations that really have no ethical obligation to attempt to provide a reasonable facsimile of reality. They have already outstripped the restraints of legislation and any type of ethical oversight. Now, these same platforms can be used to distribute media that are specifically designed to falsify reality. Of course, I should also mention that in return for access to all this, we give up a startling amount of information about ourselves. And that, according to UBC professor Taylor Owen, is deeply troubling:

“It means thinking very differently about the bargain that platforms are offering us. For a decade the deal has been that users get free services, and platforms get virtually unlimited collection of data about all aspects of our life and the ability to shape the information we consume. The answer isn’t to disengage, as these tools are embedded in our society, but instead to think critically about this bargain.

“For example, is it worth having Facebook on your mobile phone in exchange for the immense tracking data about your digital and offline behaviour? Or is the free children’s content available on YouTube worth the data profile that is being built about your toddler, the horrific content that gets algorithmically placed into your child’s feed, and the ways in which A.I. are creating content for them and shaping what they view? Is the Amazon smart speaker in your living room worth providing Amazon access to everything you say in your home? For me, the answer is a resounding ‘no’.”

2018 could be an interesting year…

What Price Privacy?

As promised, I’m picking up the thread from last week’s column on why we seem okay with trading privacy for convenience. The simple – and most plausible – answer is that we’re really not being given a choice.

As MediaPost Senior Editor Joe Mandese pointed out in a very on-point comment, what is being created is a transactional marketplace where offers of value are exchanged for information:

“Like any marketplace, you have to have your information represented in it to participate. If you’re not “listed” you cannot receive bids (offers of value) based on who you are.”

Amazon is perhaps the most relevant example of this. Take Alexa and Amazon Web Services (AWS). Alexa promises to “make your life easier and more fun.” But this comes at a price. Because Alexa is voice-activated, it’s always listening. That means the privacy of anything we say in our homes has been ceded to Amazon through their terms of service. The same is true for Google Assistant and Apple’s Siri.

But Amazon is pushing the privacy envelope even further as they test their new in-home delivery service – Amazon Key. In exchange for the convenience of having your parcels delivered inside your home when you’re away, you literally give Amazon the keys to your home. Your front door will have a smart door lock that can be opened via the remote servers of AWS. Opt in to this and suddenly you’ve given Amazon the right to not only listen to everything you say in your home but also to enter your home whenever they wish.

How do you feel about that?

This becomes the key question: how do we feel about the convenience/privacy exchange? It turns out that our response depends in large part on how that question is framed. In a study conducted in 2015 by the Annenberg School for Communication at the University of Pennsylvania, researchers gathered responses from participants probing their sensitivity around the trading of privacy for convenience. Here is a sampling of the results:

  • 55% of respondents disagreed with the statement: “It’s OK if a store where I shop uses information it has about me to create a picture of me that improves the services they provide for me.”
  • 71% disagreed with: “It’s fair for an online or physical store to monitor what I’m doing online when I’m there, in exchange for letting me use the store’s wireless internet, or Wi-Fi, without charge.”
  • 91% disagreed that: “If companies give me a discount, it is a fair exchange for them to collect information about me without my knowing.”

Here, along the spectrum of privacy pushback, we start to see what the real problem is. We’re willing to exchange private information, as long as we’re aware of all that is happening and feel in control of it. But that, of course, is unrealistic. We can’t control it. And even if we could, we’d soon learn that the overhead required to do so is unmanageable. It’s why Vint Cerf said we’re going to have to learn to live with transparency.

Again, as Mr. Mandese points out, we’re really not being given a choice. Participating in the modern economy requires us to ante up personal information. If we choose to remain totally private, we cut ourselves off from a huge portion of what’s available. And we are already at the point where the vast majority of us really can’t opt out. We all get pissed off when we hear of a security breach a la the recent Equifax debacle. Our privacy sensitivities are heightened for a day or two and we give lip service to outrage. But unless we go full-out Old Order Amish, what are our choices?

We may rationalize the trade-off by saying the private information we’re exchanging for services is not really that sensitive. But that’s where the potential threat of Big Data comes in. Gather enough seemingly innocent data, and soon you can start predicting with startling accuracy the aspects of our lives that we are sensitive about. We run headlong into the Target Pregnant Teen dilemma. And that particular dilemma becomes thornier as the walls break down between data silos and your personal information becomes a commodity on an open market.
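To make that Big Data point concrete, here is a toy sketch – with entirely hypothetical products and weights, not Target’s actual model – of how a few individually innocent purchases can add up to a confident guess about something we would consider deeply private:

```python
# Toy sketch: individually innocent signals combine into a sensitive
# inference. Hypothetical items and weights; not any retailer's model.
SIGNAL_WEIGHTS = {
    "unscented lotion": 0.25,
    "prenatal vitamins": 0.45,
    "large tote bag": 0.10,
    "cotton balls": 0.15,
}

def pregnancy_score(basket):
    """Sum the weights of any known signals found in a shopping basket."""
    return sum(SIGNAL_WEIGHTS.get(item, 0.0) for item in basket)

basket = ["bread", "unscented lotion", "cotton balls", "prenatal vitamins"]
score = pregnancy_score(basket)
print(f"score = {score:.2f}")            # 0.85
if score > 0.7:
    print("Flag household for baby-product coupons")
```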

The potential risk of trading away our privacy escalates from here – it’s the frog-in-boiling-water syndrome. It starts innocently but can soon develop into a scenario that will keep most anyone up at night with the paranoiac cold sweats. Let’s say the data is used for targeting – singling us out of the crowd for the purpose of selling stuff to us. Or – in the case of governments – seeing if we have a proclivity for terrorism. Perhaps that isn’t so scary if Big Brother is benevolent and looking out for our best interests. But what if Big Brother becomes a bully?

There is another important aspect to consider here, and one that may have dire unintended consequences. When our personal data is used to make our world more convenient for us, that requires a “filtering” of that world by some type of algorithm to remove anything that algo determines to be irrelevant or uninteresting to us. Essentially, the entire physical world is “targeted” to us. And this can go horribly wrong, as we saw in the last presidential election. Increasingly we live in a filtered “bubble” determined by things beyond our control. Our views get trapped in an echo chamber and our perspective narrows.

But perhaps the biggest red flag is the fact that in signing away our privacy by clicking accept, we often also sign away any potential protection when things do go wrong. In another study, called “The Biggest Lie on the Internet,” researchers found that when students were presented with a fictitious terms of service and privacy policy, 74% skipped reading it. And those who took the time to read didn’t take very much time – just 73 seconds on average. What almost no one caught were “gotcha clauses” about data sharing with the NSA and giving up your first-born child. While those were fictitious, real terms of service and privacy notifications often include clauses that grant total control over the information gathered about you and waive your right to sue if anything goes bad. Even if you could sue, there might not be anyone left to sue. One analyst calculated that even if all the people who had their financial information stolen from Equifax won a settlement, it would actually amount to about $81.

 

Why We’re Trading Privacy for Convenience

In today’s world, increasingly quantified and tracked by the Internet of Things, we are talking a lot about privacy. When we stop to think about it, we are vociferously for privacy. But then we immediately turn around and click another “accept” box on a terms and conditions form that barters our personal privacy away, in increasingly large chunks. What we say and what we do are two very different things.

What is the deal with humans and privacy anyway? Why do we say it is important to us, and why do we keep giving it away? Are we looking at the inevitable death of our concept of privacy?

Are We Hardwired for Privacy?

It does seem that – all things being equal – we favor privacy. But why?

There is an evolutionary argument for having some “me-time.” Privacy has an evolutionary advantage both when you’re most vulnerable to physical danger (on the toilet) and when you’re exposed to mating rivalry (having sex). If you can keep these things private, you’ll both live longer and have more offspring. So it’s not unusual for humans to be hardwired to desire a certain amount of privacy.

But our modern understanding of privacy actually conflates a number of concepts. There is protective privacy, the need for solitude and finally there’s our moral and ethical privacy. Each of these has different behavioral origins, but when we talk about our “right to privacy” we don’t distinguish between them. This can muddy the waters when we dig deep into our relationship with our privacy.

Blame England…

Let’s start with the last of these – our moral privacy. This is actually a pretty modern concept. Until 150 years ago, we as a species did pretty much everything communally. Our modern concept of privacy had its roots in the Industrial Revolution and Victorian England. There, the widespread availability of the patent lock and the introduction of the “private” room quickly led to a class-stratified quest for privacy. This was coupled with the moral rectitude of the time. Kate Kershner from howstuffworks.com explains:

“In the Victorian era, the “personal” became taboo; the gilded presentation of yourself and family was critical to social standing. Women were responsible for outward piety and purity, men had to exert control over inner desires and urges, and everyone was responsible for keeping up appearances.”

In Victorian England, privacy became a proxy for social status. Only the highest levels of the social elite could afford privacy. True, there was some degree of personal protection here that probably had evolutionary behavioral underpinnings, but it was all tied up in the broader evolutionary concept of social status. The higher your class, the more you could hide away the all-too-human aspects of your private life and thoughts. In this sense, privacy was not a right, but a status token that may be traded off for another token of equal or higher value. I suspect this is why we may say one thing but do another when it comes to our own privacy. There are other ways we determine status now.

Privacy vs Convenience

In a previous column, I wrote about how being busy is the new status symbol. We are defining social status differently, and I think how we view privacy might be caught between how we used to recognize status and how we do it today. In 2013, Google’s Vint Cerf said that privacy may be a historical anomaly. Social libertarians and legislators were quick to condemn Cerf’s comment, but it’s hard to argue with his logic. In Cerf’s words, transparency “is something we’re gonna have to live through.”

Privacy might still be a hot-button topic for legislators, but it’s probably dying not because of some nefarious plot against us but rather because we’re quickly trading it away. Busy is the new rich, and convenience (or our illusion of convenience) allows us to do more things. Privacy may just be a tally token in our quest for social status and, increasingly, we may be willing to trade it for more relevant tokens. As Greg Ferenstein, author of the Ferenstein Wire, said in an exhaustive (and visually bountiful) post on the birth and death of privacy,

“Humans invariably choose money, prestige or convenience when it has conflicted with a desire for solitude.”

If we take this view, then it’s not so much how we lose our privacy that becomes important but who we’re losing it to. We seem all too willing to give up our personal data as long as two prerequisites are met: 1) We get something in return; and, 2) We have a little bit of trust in the holder of our data that they won’t use it for evil purposes.

I know those two points raise the hackles of many amongst you, but that’s where I’ll have to leave it for now. I welcome you to have the next-to-last word (because I’ll definitely be revisiting this topic). Is privacy going off the rails and, if so, why?