Why Do Cities Work?

It always amazes me how cities just seem to work. Take New York, for example. How the hell does everything a city of nine million people needs in order to keep existing actually happen? Cities are perhaps the best example I know of how complex adaptive systems can work in the real world. They may be the answer to our future as the world becomes a more complex and connected place.

It’s not due to any centralized sense of communal collaboration. If anything, cities make us more individualistic. Small towns are much more collaborative. I feel more anonymous and autonomous in a big city than I ever do in a small town. It’s something else, more akin to Adam Smith’s Invisible Hand – but different. Millions of individual agents can all do their own thing based on their own requirements, but it works out okay for all involved.

Actually, according to Harvard economist Ed Glaeser, cities are more than just okay. He calls them mankind’s greatest invention. “So much of what humankind has achieved over the past three millennia has come out of the remarkable collaborative creations that come out of cities. We are a social species. We come out of the womb with the ability to sop up information from people around us. It’s almost our defining characteristic as creatures. And cities play to that strength. Cities enable us to learn from other people.”

Somehow, cities manage to harness the collective potential of their population without dipping into chaos. This is all the more amazing when you consider that cities aren’t natural for humans – at least not in evolutionary terms. If evolution were all that mattered, we would all live in clusters of about 150 people – otherwise known as Dunbar’s number. That’s the brain’s cognitive limit for keeping track of our own immediate social networks. If we’re looking for a magic number in terms of maximizing human cooperation and collaboration, that would be it. But somehow cities allow us to far surpass that number and still deliver exponential returns.

Most of our natural defense mechanisms are based on familiarity. Trust, in its most basic sense, is Pavlovian. We trust strangers who happen to resemble people we know and trust. We are wary of strangers who remind us of people who have taken advantage of us. We are primed to trust or distrust in a few milliseconds, far under the time threshold of rational thought. Humans evolved to live in communities where we keep seeing the same faces over and over – yet cities are the antithesis of this.

Cities work because it’s in everyone’s best interest to make cities work. In a city, people may not trust each other, but they do trust the system. And it’s that system – or rather, thousands of complementary systems – that makes cities work. We contribute to these systems because we have a stake in them. The majority of us avoid the Tragedy of the Commons because we understand that if we screw the system, the system becomes unsustainable and we all lose. There is an “invisible network of trust” that makes cities work.

The psychology of this trust is interesting. As I mentioned before, in evolutionary terms, the mechanisms that trigger trust are fairly rudimentary: Familiarity = Trust. But system trust is a different beast. It relies on social norms and morals – on our inherent need to conform to the will of the herd. In this case, there is at least one degree of separation between trust and the instincts that govern our behaviors. Think of it as a type of “meta-trust.” We are morally obligated to contribute to the system as long as we believe the system will increase our own personal well-being.

This moral obligation requires feedback. There needs to be some type of loop that shows us that our moral behaviors are paying off. As long as that loop is working, it creates a virtuous cycle. Moral behaviors need to lead to easily recognized rewards, both individually and collectively. As long as we have this loop, we will continue to be governed by social norms that maintain the systems of a city.

When we look to cities to provide us clues on how to maintain stability in a more connected world, we need to understand this concept of feedback. Cities provide feedback through physical proximity. When cities start to break down, the results become obvious to all who live there. But when it’s digital bonds rather than physical ones that link our networks, feedback becomes trickier. We need to ponder other ways of connecting cause, effect and consequences. As we move from physical communities to ideological ones, we have to overcome the numbing effects of distance.

 

Tempest in a Tweet-Pot

On February 16, a Facebook VP of Ads named Rob Goldman had a bad day. That was the day the office of Special Counsel Robert Mueller released an indictment of 13 Russian operatives accused of interfering in the U.S. election. Goldman felt he had to comment via a series of tweets that appeared to question the seriousness with which the Mueller investigation had considered the ads placed by Russians on Facebook. Nothing much happened for the rest of the day. But on February 17, after the US Tweeter-in-Chief – Donald Trump – picked up the thread, Facebook realized the tweets had turned into a “shit sandwich,” and to limit the damage, Goldman had to officially apologize.

It’s just one more example of a personal tweet blowing up into a major news event. This is happening with increasingly irritating frequency. So today, I thought I’d explore why.

Personal Brand vs Corporate Brand

First, why did Rob Goldman feel he had to go public with his views anyway? He did because he could. We all have varying degrees of loyalty to our employer, and I’m sure the same is true for Mr. Goldman. Otherwise he wouldn’t have eaten crow a few days later with his public mea culpa. But our true loyalties go not to the brand we work for, but to the brand we are. Goldman – like me, like you, like all of us – is building his personal brand. Anyone who says they’re not – yet posts anything online – is in denial. Goldman’s brand, according to his Twitter account, is “Student, seeker, raconteur, burner. ENFP.” That is followed by the disclaimer “Views are mine.” And you know what? This whole debacle has been great for Goldman’s brand, at least in terms of audience size. Before February 16th, he had about 1,500 followers. When I checked, that had swelled to almost 12,000. Brand Goldman is on a roll!

The idea of a personal brand is new – just a few decades old. It really became amplified through the use of social media. Suddenly, you could have an audience – and not just any audience, but an audience numbering in the millions.

Before that, the only people who could have been said to have personal brands were artists, authors and musicians. They made their living by sharing who they were with us.

For the rest of us, our brands were trapped in our own contexts. Only the people who knew us were exposed to our brands. But the amplification of social media suddenly exposes our brands to a much broader audience. And when things went viral, as they did on February 17, millions suddenly became aware of Rob Goldman and his tweet without knowing anything more than that he was a VP of Ads for Facebook.

It was that connection that created the second issue for Goldman. When we speak for our own personal brands, we can say “views are mine,” but the problem always comes when things blow up, as they did for Rob Goldman. None of his tweets had been vetted by anyone at Facebook, yet he had suddenly become a spokesperson for the corporation. And for those eager to accept his tweets as fact, they suddenly became the “truth.”

Twitter: “Truth” Without Context

Increasingly, we’re not really that interested in the truth. What we are interested in is our beliefs and our own personal truth. This is the era of “Post-Truth” – Oxford Dictionaries’ word of the year for 2016 – defined as “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.”

Truth was once a commonly understood base that could be supported by facts. Now, truth is in the eye of the beholder. Common understandings are increasingly difficult to come to as the world continues to fragment and become more complex. How can we possibly come to a common understanding of what is “true” when any issue worth discussing is complex? This is certainly true of the Mueller investigation. To try to distill the scope of it to 900 words – about the length of this column – would be virtually impossible. To reduce it to 280 characters – the limit of a tweet, and one-twentieth the length of this column – well, there we should not tread. But, of course, we do.

This problem is exacerbated by the medium itself. Twitter is a channel that encourages “quippiness.” When we’re tweeting, we all want to be Oscar Wilde. Again, writing this column usually takes me 3 to 4 hours, including time to do some research, create a rough outline and then do the actual writing. That’s not an especially long time, but the process does allow some time for mental reflection and self-editing. The average tweet takes less than a minute to write – probably less to think about – and then it’s out there, a matter of record, irretrievable. You should find it more than a little terrifying that this is the chosen medium of the President of the United States, and one that is increasingly forming our worldview.

Twitter is also not a medium that provides much support for irony, sarcasm or satire. In the Post-Truth era, we usually accept tweets as facts, especially when they come from someone in a somewhat official position, as in the case of Rob Goldman. But at best, they’re abbreviated opinions.

In the light of all this, one has to appreciate Mr. Goldman’s Twitter handle: @robjective.

Sharing a Little about the Sharing Economy

In the last week, I’ve had first-hand experience with the sharing economy, using both Uber and Airbnb. I – not surprisingly – am a sucker for disruption and will gladly adopt new technologies. I appreciate the rational logic of a well-thought-out platform that promises to be a game changer. I push my wife’s comfort level to the breaking point, trying mightily to maintain the balance between delightful discovery and that cold stare that means I’ve completely messed up this time. It is in that spirit – and with the admitted bias of being a sample of one – that I share some of my macro-level observations.

Creating an Opening for Innovation

To me, the term “sharing economy” doesn’t quite cut it. That only explains one aspect of this disruption – the supply side. What is really happening here is the democratization and fragmentation of a previously verticalized market, where the platform creates a new type of one-to-one market connection. That spreads the market horizontally, which in turn opens a wide door for participation at all levels. And that, inevitably, spurs innovation. When you allow everyone to be creative – rather than just the few within a vertically integrated chain who have it in their job description – the pace of innovation can’t help but accelerate.

Disruptive Platforms and Network Effects

Innovation is a good thing, but there is another side to this. If you allow for rampant innovation and facilitate one-to-one connections at all levels of the market, you are going to have network effects. Markets become more chaotic and less predictable. The rising tide of innovation will eventually raise all boats, but it also means the waters can get a little choppy on the way. Disruptive platforms strip away traditional control systems – corporate oversight, traditional forms of consumer protection and legislative regulation. All faith – on both sides of the market – is placed in the design of the platform to ensure self-correcting regulation. There’s just one problem with that…

Compression of Pendulum Markets

When you depend on self-correction in a dynamic market, you forgo the stability that typically comes from vertical oversight. Not only do you remove the oversight, but you also remove predictability. There are new players entering and exiting the market all the time. And even if the players stabilize, experience has limited value in a marketplace that may not do tomorrow what it did yesterday.

All sharing platforms – Uber and Airbnb included – depend on market feedback to ensure self-correction. In these two cases, they have well-thought-out market control mechanisms, but feedback is – by necessity – a reactive rather than a proactive device. You can anticipate with reasonable confidence in a stable, controlled market, but you can’t in a dynamic, networked market. All you can do is respond. This creates a pendulum effect. Constant connection to the platform means that feedback is fast, but the physics of a pendulum mean that the volatility of the swings back and forth is greatest at the beginning and stabilizes over time.

This creates what I would call the Bubbles and Backlash phenomenon. As markets open up, new suppliers jump on the bandwagon. Some are great, some are horrible, some are mediocre. But it will take the platform and its self-correcting mechanisms some time to sort them out. Also, we have to hope the mechanisms are reasonably robust against suppliers who want to game the system. I think both Uber and Airbnb are working their way through this particular pain point right now. I find ratings artificially high on many suppliers with whom I’ve had personal experience. There could be a number of reasons for this, including the psychological bias of reciprocity, but I think most platforms have some tweaking to do before the user ratings provide a reasonable frame of expectations.

Inevitable Gaps in the User Experience

Finally, because the travel market is moving from a vertical orientation to a horizontal one, it leaves it up to the user to navigate her way through the various horizontal layers that stack together to create her individual user journey. When you’re in a layer – taking Uber to the airport for example – you’re probably okay. But it’s moving from layer to layer that places a little extra demand on the user. The previous players who inhabited the niches within the vertical ecosystem are understandably reluctant to share their niches with new, disruptive players. Where, for example, do you catch the Uber at the airport?

But All’s Well that Ends Well

In the end, it comes down to a matter of taste. I am an early adopter, so I will always choose disruption over the status quo. For those of a different bent, the vertically integrated path is still open to them. But for all of us, I believe disruption has created a travel marketplace that is more diverse, authentic and rewarding than ever before.

 

Short Sightedness, Sharks and Mental Myopia

2017 was an average year for shark attacks.

And this just in…

By the year 2050 half of the World will be Near Sighted.

What could these two headlines possibly have in common? Well, sit back – I’ll tell you.

First, let’s look at why 2017 was a decidedly non-eventful year – at least when it came to interactions between Selachimorpha (sharks) and Homo (us). Nothing unusual happened. That’s it. There was no sudden spike in Jaws-like incidents. Sharks didn’t suddenly disappear from the world’s oceans. Everything was just – average. Was it the only way that 2017 was uneventful? No. There were others. But we didn’t notice because we were focused on the ways that the world seemed to be going to hell in a handbasket. If we look at 2017 like a bell curve, we were focused on the outliers, not the middle.

There’s no shame in that. That’s what we do. The usual doesn’t make the nightly news. It doesn’t even make our Facebook feed. But here’s the thing: we live most of our lives in the middle of the curve, not in the outlier extremes. The things that are most relevant to our lives fall squarely into the usual. But all the communication channels that have been built to deliver information to us are focused on the unusual. And that’s because we insist not on being informed, but instead on being amused.

In 1985, Neil Postman wrote the book Amusing Ourselves to Death. In it, he charts how the introduction of electronic media – especially television – hastened our decline into a dystopian existence that shared more than a few parallels with Aldous Huxley’s Brave New World. His warning was pointed, to say the least: “There are two ways by which the spirit of a culture may be shrivelled,” Postman says. “In the first—the Orwellian—culture becomes a prison. In the second—the Huxleyan—culture becomes a burlesque.” It’s probably worth reminding ourselves of what burlesque means: “a literary or dramatic work that seeks to ridicule by means of grotesque exaggeration or comic imitation.” If the transformation of our culture into burlesque seemed apparent in the ’80s, you’d pretty much have to say it’s a fait accompli more than 30 years later. Grotesque exaggeration is the new normal, not to mention the new president.

But this steering of our numbed senses towards the extremes has some consequences. As the world becomes more extreme, it requires more extreme events to catch our notice. We are spending more and more of our media consumption time amongst the outliers. And that brings up the second problem.

Extremes – by their nature – tend to be ideologically polarized as well. If we’re going to consider extremes that carry a politically charged message, we stick to the extremes that are well synced with our worldview. In cognitive terms, these ideas are “fluent” – they’re easier to process. The more polarized and extreme a message is, the more important it is that it be fluent for us. We also are more likely to filter out non-fluent messages – messages that we don’t happen to agree with.

The third problem is that we are becoming short-sighted (see, I told you I’d get there eventually). So not only do we look for extremes, we are increasingly seeking out the trivial. We do so because being informed is increasingly scaring the bejeezus out of us. We don’t look too deep, nor do we look too far into the future – because the future is scary. There is the collapse of our climate, World War III with North Korea, four more years of Trump… this stuff is terrifying. Increasingly, we spend our cognitive resources looking for things that are amusing and immediate. The information we seek has to provide immediate gratification. Yes, we are becoming physically short-sighted because we stare at screens too much, but we’re also becoming mentally myopic.

If all this is disturbing, don’t worry. Just grab a Soma and enjoy a Feelie.

Drawing a Line in the Sand for Net Privacy

Ever heard of Strava? The likelihood that you would say yes jumped astronomically on January 27, 2018. That was the day of the Strava security breach. Before that, you had probably never heard of it, unless you happened to be a cyclist or runner.

I’ve talked about Strava before. Then, I was talking about social modality and trying to keep our various selves straight on various social networks. Today, I’m talking about privacy.

Through GPS-enabled devices, like a fitness tracker or smartphone, Strava enables you to track your workouts, including the routes you take. Once a year, they aggregate all these activities and publish them as a global heatmap. Over 1 billion workouts are mapped in every corner of the earth. If you zoom in enough, you’ll see my favorite cycling routes in the city I live in. The same is true for everyone who uses the app. Unless – of course – you’ve opted out of the public display of your workouts.

And therein lies the problem. Actually – two problems.

Problem number one. There is really no reason I shouldn’t share my workouts. The worst you could find out is that I’m a creature of habit when working out. But if I’m a marine stationed at a secret military base in Afghanistan and I start my morning jogging around the perimeter of the base – well – now we have a problem. I just inadvertently highlighted my base on the map for the world to see. And that’s exactly what happened. When the heatmap went live, a university student in Australia happened to notice there were a number of hotspots in the middle of nowhere in Afghanistan and Syria.

On to problem number two. In terms of numbers affected, the Strava breach is a drop in the bucket when you compare it to Yahoo – or Equifax – or Target – or any of the other breaches that have made the news. But this breach was different in a very important way. The victims here weren’t individual consumers. This time, national security was threatened. And that moved it beyond the “consumer beware” defense that typically gets invoked.

This charts new territory for privacy. The difference in perspective in this breach has heightened sensitivities and moved the conversation in a new direction. Typically, the response when there is a breach is:

  1. You should have known better
  2. You should have taken steps to protect your information; or,
  3. Hmmm, it sucks to be you

Somehow, this response has held up in the previous breaches despite the fact that we all know that it’s almost impossible to navigate the minefield of settings and preferences that lies between you and foolproof privacy. As long as the victims were individuals it was easy to shift blame. This time, however, the victim was the collective “we” and the topic was the hot button of all hot buttons – national security.

Now, one could and should argue that all of these might apply to the unfortunate soldier who decided to take his Fitbit on his run, but I don’t think it will end there. I think the current “opt out” approach to net privacy might have to be reconsidered. The fact is, all these platforms would prefer to gather as much of your data as possible and have the right to use it as they see fit. It opens up a number of monetization opportunities for them. Typically, the quid pro quo offered back to you – the user – is more functionality and the ability to share with your own social circle. The current ecosystem’s default starting point is to enable as much sharing and functionality as possible. Humans being human, we will usually go with the easiest option – the default – and only worry about it if something goes wrong.

But as users, we do have the right to push back. We have to realize that opening the full data pipe gives the platforms much more value than we ever receive in return. We’re selling off our own personal data for the modern-day equivalent of beads and trinkets. And the traditional corporate response – “you can always opt out if you want” – simply takes advantage of our own human limitations. The current fallback is that the platforms are introducing more transparency into their approaches to privacy, making them easier to understand. While this is a step in the right direction, a more ethical approach would be “opt in,” where the default is the maximum protection of our privacy and we have to make a conscious effort to lower that wall.

We’ll see. Opting in puts ethics and profitability on a collision course. For that reason, I can’t ever see the platforms going in that direction unless we insist.

 

 

Sorry, I Don’t Speak Complexity

I was reading about an interesting study from Cornell this week. Dr. Morten Christiansen, co-director of Cornell’s Cognitive Science Program, and his colleagues explored an interesting linguistic paradox: languages that a lot of people speak – like English and Mandarin – have large vocabularies but relatively simple grammar. Languages that are smaller and more localized have fewer words but more complex grammatical rules.

The reason, Christiansen found, has to do with the ease of learning. It doesn’t take much to learn a new word. A couple of exposures and you’ve assimilated it. Because of this, new words become memes that tend to propagate quickly through the population. But the foundations of grammar are much more difficult to understand and learn. It takes repeated exposures and an application of effort to learn them.

Language is a shared cultural component that depends on the structure of a network. We get an inside view of network dynamics from investigating the spread of language. Take the complexity of a syntactic rule, for example. These are the rules that govern sentence structure, word order and punctuation. In terms of learnability, syntax offers much more complexity than simply understanding the definition of a word. In order to learn syntax, you need repeated exposures to it. And this is where the structure and scope of a network come in. As Dr. Christiansen explains,

“If you have to have multiple exposures to, say, a complex syntactic rule, in smaller communities it’s easier for it to spread and be maintained in the population.”

This research seems to indicate that cultural complexity is first spawned in heavily interlinked and relatively intimate network nodes. For these memes – whether they be language, art, philosophies or ideologies – to bridge to and spread through the greater network, they are often simplified so they’re easier to assimilate.

If this is true, then we have to consider what might happen as our world becomes more interconnected. Will there be a collective “dumbing down” of culture? If current events are any indication, that certainly seems to be the case. The memes with the highest potential to spread are absurdly simple. No effort on the part of the receiver is required to understand them.

But there is a counterpoint to this that does hold out some hope. As Christiansen reminds us, “People can self-organize into smaller communities to counteract that drive toward simplification.” From this emerges an interesting yin and yang of cultural content creation. You have highly connected nodes, independent of geography, that are producing some truly complex content. But, because of the high threshold of assimilation required, the complexity becomes trapped in those nodes. The only things that escape are fragments of that content that can be simplified to the point where they can go viral through the greater network. But to do so, they have to be stripped of their context.

This is exactly what caused the language paradox that the team explored. If you have a wide network – or a large population of speakers – there are a greater number of nodes producing new content. In this instance, the words are the fragments, which can be assimilated, and the grammar is the context that gets left behind.

There is another aspect of this to consider. Because of these dynamics unique to a large and highly connected network, the simple and trivial naturally rises to the top. Complexity gets trapped beneath the surface, imprisoned in isolated nodes within the network. But this doesn’t mean complexity goes away – it just fragments and becomes more specific to the node in which it originated. The network loses a common understanding and definition of that complexity. We lose our shared ideological touchstones, which are by necessity more complex.

If we speculate on where this might go in the future, it’s not unreasonable to expect to see an increase in tribalism in matters related to any type of complexity – like religion or politics – and a continuing expansion of simple cultural memes.

The only time we may truly come together as a society is to share a video of a cat playing basketball.

 

 

Thinking Beyond the Brand

Apparently boring is the new gold standard of branding, at least when it comes to ranking countries on the international stage. According to a new report from US News, the Wharton School and Y&R’s BAV Group, Canada is the No. 2 country in the world. That’s right – Canada – the country that Robin Williams called “a really nice apartment over a meth lab.”

The methodology here is interesting. It was basically a brand benchmarking study. That’s what BAV does. They’re the “world’s largest and leading empirical study of brands.” And Canada’s brand is: safe, slightly left-leaning, polite, predictable and – yes – boring. Oh – and we have lakes and mountains.

Who, you may ask, beat us? Switzerland – a country that is safe, slightly left-leaning, polite, predictable and – yes – boring. Oh – and they have lakes and mountains too.

This study has managed to reduce entire countries to the type of cognitive shorthand we call a brand. As a Canadian, I can tell you this country contains multitudes – some good, some bad – and remarkably little of it is boring. We’re like an iceberg (literally, in some months) – there’s a lot that lies under the surface. But as far as the world is concerned, you already know everything you need to know about Canada, and no further learning is required.

That’s the problem with branding. We rely more and more on whatever brand perceptions we already have in place, without thinking too much about whether they’re based on valid knowledge. We certainly don’t go out of our way to challenge those perceptions. What was originally intended to sell dish soap is being used as a cognitive shortcut for everything we do. We rely on branding – instant know-ability – or what I called labelability in a previous column. We spend more and more of our time knowing and less and less of it learning.

Branding is a mental rot that is reducing everything to a broadly sketched caricature.

Take politics, for example. That same BAV Group turned its branding spotlight on candidates for the next presidential election. Y&R CEO David Sable explored just how important branding will be in 2020. Spoiler alert: it will be huge.

When BAV looked at the brands of various candidates, Trump continues to dominate. This was true in 2016, and depending on the variables of fate currently in play, it could be true in 2020 as well. “We showed how fresh and powerful President Trump was as a brand, and just how tired and weak Hillary was… despite having more esteem and stature.”

Sable prefaced his exploration with this warning: “What follows is not a political screed, endorsement or advocacy of any sort. It is more a questioning of ourselves, with some data thrown to add to the interrogative.” In other words, he’s saying that this is not really based on any type of rational foundation; it’s simply evaluating what people believe. And I find that particular mental decoupling to be troubling.

This idea of cognitive shorthand is increasingly prevalent in an attention-deficit world. Everything is being reduced to a brand. The problem with this is that once that brand has been “branded,” it’s very difficult to shake. Our world is being boiled down to branding and target marketing. Our brains have effectively become pigeonholed. That’s why Trump was right when he said, “I could stand in the middle of Fifth Avenue and shoot somebody and I wouldn’t lose any voters.”

We have a dangerous spiral developing. In a world with an escalating amount of information, we increasingly rely on brands/beliefs for our rationalization of the world. When we do expose ourselves to information, we rely on information that reinforces those brands and beliefs. Barack Obama identified this in a recent interview with David Letterman: “One of the biggest challenges we have to our democracy is the degree to which we don’t share a common baseline of facts. We are operating in completely different information universes. If you watch Fox News, you are living on a different planet than you are if you listen to NPR.”

Our information sources have to be “on-brand”. And those sources are filtered by algorithms shaped by our current beliefs. As our bubble solidifies, there is nary a crack left for a fresh perspective to sneak in.

 

The Decentralization of Trust

Forget Bitcoin. It’s a symptom. Forget even Blockchain. It’s big – but it’s technology. That makes it a tool. Which means it’s used at our will. And that will is the real story. Our will is always the real story – why do we build the tools we do? What is revolutionary is that we’ve finally found a way to decentralize trust. That runs against the very nature of how we’ve defined trust for centuries.

And that’s the big deal.

Trust began by being very intimate – ruled by our instincts in a face-to-face context. But for the last thousand years, our history has been all about concentration – the massing together of everything, including whom we trust. We have consolidated our defense, our government, our commerce and our culture. In doing so, we have also consolidated our trust in a few all-powerful institutions.

But the past 20 years have been all about decentralization and tearing down power structures, as we invent new technologies that let us do that. In that vein, Blockchain is a doozy. It will change everything. But it’s only a big deal because we’re exerting our will to make it a big deal. And the “why” behind that is what I’m focusing on.

Right or wrong, we have now decided we’d rather trust distribution than centralization. There is much evidence to support that view. Concentration of power also means concentration of risk. The opportunity for corruption skyrockets. Big things tend to rot from the inside out. This is not a new discovery on our part. We’ve known for at least a few centuries that “absolute power corrupts absolutely.”

As the world consolidated, it also became more corrupt. But it was always a trade-off we felt we had to make. Again, the collective will of the people is the story thread to follow here. Consolidation brought many benefits. We wouldn’t be where we are today if it wasn’t for hierarchies, in one form or another. So we willingly subjugated ourselves to someone – somewhere – hoping to maintain a delicate balance where the risk of corruption was outweighed by personal gain. I remember asking the Atlantic’s noted correspondent, James Fallows, a question when I met him once in China. I asked how the average Chinese citizen could tolerate the paradoxical mix of rampant economic entrepreneurialism and crushing ideological totalitarianism. His answer was: “As long as their lives are better today than they were yesterday, and promise to be even better tomorrow, they’ll tolerate it.”

That pretty much summarizes our attitudes towards control. We tolerated it because if we wanted our lives to continue to improve, we really didn’t have a choice. But perhaps we do now. And that possibility has pushed our collective will away from consolidated power hubs and towards decentralized networks. Blockchain gives us another way to do that. It promises a way to work around Big Money, Big Banks, Big Government and Big Business. We are eager to do so. Why? Because up to now we have had to place our trust in these centralized institutions, and that trust has been consistently abused. But perhaps Blockchain technology has found a foolproof way to distribute trust. It appears to offer a way to make everything better without the historic trade-off of subjugating ourselves to anyone.

However, when we move our trust to a network, we also make that trust subject to unanticipated network effects. That may be the new trade-off we have to make. Increasingly, our technology is dependent on networks, which – by their nature – are complex adaptive systems. That’s why I keep preaching the same message: we have to understand complexity. We must accept that complexity has interaction effects we could never successfully predict.

It’s an interesting swap to consider – control for complexity. Control has always offered us the faint comfort of an illusion of predictability. We hoped that someone who knew more than we did was manning the controls. This is new territory for us. Will it be better? Who can say? But we seem to be building an irreversible head of steam in that direction.

Fat Heads and Long Tails: Living in a Viral World

I, and the rest of the world, bought “Fire and Fury: Inside the Trump White House” last Friday. Forbes reports that in one weekend it climbed to the top of Amazon’s best-seller list, and demand for the book is “unprecedented.”

We use that word a lot now. Our world seems to be a launching pad for “unprecedented” events. Nassim Nicholas Taleb’s black swans used to be the exception — that was the definition of the term. Now they’re becoming the norm. You can’t walk down the street without accidentally kicking one.

Our world is a hyper-connected feedback loop that constantly engenders the “unprecedented”: storms, blockbusters, presidents. In this world, historical balance has disappeared and all bets are off.

One of the many things that has changed is the distribution pattern of culture. In 2006, Chris Anderson wrote the book “The Long Tail,” explaining how online merchandising, digital distribution and improved fulfillment logistics created an explosion of choices. Suddenly, the distribution curve of pretty much everything — music, books, apps, video, varieties of cheese — grew longer and longer, creating Anderson’s “Long Tail.”

But let’s flip the curve and look at the other end. The curve has not just grown longer. The leading edge of it has also grown on the other axis. Heads are now fatter.

“Fire and Fury” has sold more copies in a shorter period of time than would have ever been possible at any other time in history. That’s partly because of the same factors that created the Long Tail: digital fulfillment and more efficient distribution. But the biggest factor is that our culture is now a digitally connected echo chamber that creates the perfect conditions for virality. Feeding frenzies are now an essential element of our content marketing strategies.

If ever there was a book written to go viral, it’s “Fire and Fury.” Every page should have a share button. Not surprisingly, given its subject matter, the book has all the subtlety and nuance of a brick to the head. This is a book built to be a blockbuster.

And that’s the thing about the new normal of virality: Blockbusters become the expectation out of the starting gate.

As I said last week, content producers have every intention of addicting their audience, shooting for binge consumption of each new offering. Michael Wolff wrote this book to be consumed in one sitting.

As futurist (or “futuristorian”) Brad Berens writes, the book is “fascinating in an I-can’t-look-away-at-the-17-car-pileup-with-lots-of-ambulances way.” But there’s usually a price to be paid for going down the sensational path. “Fire and Fury” has all the staying power of a “bag of Cheetos.” Again, Berens hits the nail on the head: “You can measure the relevance of Wolff’s book in half-lives, with each half-life being about a day.”

One of the uncanny things about Donald Trump is that he always out-sensationalizes any attempt to sensationalize him. He is the ultimate “viral” leader, intentionally — or not — the master of the “Fat Head.” Today that head is dedicated to Wolff’s book. Tomorrow, Trump will do something to knock it out of the spotlight.

Social media analytics developer Tom Maiaroto found the average sharing lifespan of viral content is about a day. So while the Fat Head may indeed be Fat, it’s also extremely short-lived. This means that, increasingly, content intended to go viral — whether it be books, TV shows or movies — is intentionally developed to hit this short but critical window.

So what is the psychology behind virality? What buttons have to be pushed to start the viral cascade?

Wharton Marketing Professor Jonah Berger, who researched what makes things go viral, identified six principles: Social Currency, Memory Triggers, Emotion, Social Proof, Practical Value and Stories. “Fire and Fury” checks almost all these boxes, with the possible exception of practical value.

But it most strongly resonates with social currency, social proof and emotion. For everyone who thinks Trump is a disaster of unprecedented proportions, this book acts as kind of an ideological statement, a social positioner, an emotional rant and confirmation bias all rolled into one. It is a tribal badge in print form.

When we look at the diffusion of content through the market, technology has again acted as a polarizing factor. New releases are pushed toward the outlier extremes, either far down the Long Tail or squarely aimed at cashing in on the Fat Head. And if it’s the latter of these, then going viral becomes critical.

Expect more fire. Expect more fury.

Watching TV Through The Overton Window

Tell me, does anyone else have a problem with this recent statement by HBO CEO Richard Plepler: “I am trying to build addicts — and I want people addicted to something every week”?

I read this in a MediaPost column about a month ago. At the time, I filed it away as something vaguely troubling. I just checked and found no one else had commented on it. Nothing. We all collectively yawned as we checked out the next series to binge watch. That’s just what we do now.

When did enabling addiction become a goal worth shooting for? What made the head of a major entertainment corporation think it was OK to use a term that is defined as “persistent, compulsive use of a substance known to the user to be harmful” to describe a strategic aspiration? And, most troubling of all, when did we all collectively decide that that was OK?

Am I overreacting? Is bulk consuming an entire season’s worth of “Game of Thrones” or “Big Little Lies” over a 48-hour period harmless?

Speaking personally, when I emerge from my big-screen basement cave after watching more than two episodes of anything in a row, I feel like crap. And there’s growing evidence that I’m not alone. I truly believe this is not a healthy direction for us.

But my point here is not to debate the pros and cons of binge watching. My point is that Plepler’s statement didn’t cause any type of adverse reaction. We just accepted it. And that may be because of something called the Overton Window.

The Overton Window was named after Joseph Overton, who developed the concept at a libertarian think tank — the Mackinac Center for Public Policy — in the mid-1990s.

Typically, the term is used to talk about the range of policies acceptable to the public in the world of politics. In the middle of the window lies current policy. Moving out from the center in both directions (right and left) are the degrees of diminishing acceptability. In order, these are: Popular, Sensible, Acceptable, Radical and Unthinkable.

The window can move, with ideas that were once unthinkable eventually becoming acceptable or even popular due to the shifting threshold of public acceptance. The concept, which has roots going back over 150 years, has again bubbled to the top of our consciousness thanks to Trumpian politics, which make “extreme things look normal,” according to a post on Vox.

Political strategists have embraced and leveraged the concept to try to bring their own agendas within the ever-moving window. Because here’s the interesting thing about the Overton Window: If you want to move it substantially, the fastest way to do it is to float something outrageous to the public and ask them to consider it. Once you’ve set a frame of consideration towards the outliers, it tends to move the window substantially in that direction, bringing everything less extreme suddenly within the bounds of the window.

This has turned The Overton Window into a political strategic tug of war, with the right and left battling to shift the window by increasingly moving to the extremes.

What’s most intriguing about the Overton Window is how it reinforces the idea that much of our social sensibility is relative rather than absolute. Our worldview is shaped not only by what we believe, but what we believe others will find acceptable. Our perspective is constantly being framed relative to societal norms.

Perhaps — just perhaps — the CEO of HBO can now use the word “addict” when talking about entertainment because our perspective has been shifted toward an outlying idea that compulsive consumption is OK, or even desirable.

But I have to call bullshit on that. I don’t believe it’s OK. It’s not something we as an industry — whether that industry is marketing or entertainment — should be endorsing. It’s not ennobling us; it’s enabling us.

There’s a reason why the word “addict” has a negative connotation. If our “window” of acceptability has shifted to the point where we just blithely accept these types of statements and move on, perhaps it’s time to shift the window in the opposite direction.