Raising an Anti-fragile Brand

I’ve come to realize that brand building is a lot like having kids. Much as you want to, at some point you simply can’t control their lives. All you can do is lay a strong foundation. Then you have to cast them adrift on the vicissitudes of life and hope they bounce in the right direction more often than not. It’s a crapshoot, so you damn well better hedge your bets.

Luck rules a perverse universe. All the planning in the world can’t prevent bad luck. Crappy things happen with astonishing regularity to very organized, competent people. The same is true of brands. Crappy things can happen to good brands at any moment – and all the planning in the world can’t prevent it.

Take October 31, 2017 for instance. On that day, Sayfullo Saipov drove a rented truck down a bike lane on Manhattan’s west side, killing 8 and injuring 11 others. What does this have to do with branding? Saipov rented his truck from Home Depot. All the pictures and video of the incident showed the truck with a huge Home Depot logo on the door. You know the saying that there’s no such thing as bad publicity? Wrong!

Or take August 11, 2017, when a bunch of white supremacists decided to hold a torchlight rally in Charlottesville. Their torch of preference? The iconic Tiki Torch which, ironically, is based on a decidedly non-white Polynesian design. Tiki soon took to social media to indicate they were not amused by the neo-Nazis’ choice.

The first instinct when things go wrong – with kids or brands – is to want to jump in and exert control. But that doesn’t work very well in either case. You need to build “anti-fragility.” This concept, coined by Nassim Nicholas Taleb, describes systems in which “shocks and disruptions make you stronger and more creative, better able to adapt to each new challenge you face.” So, in the interest of anti-fragility – of kids or brands – here are a few things I’ve learned.

Do the Right Thing….

Like the advice in the title of Spike Lee’s 1989 movie, you should always “Do the Right Thing.” That doesn’t mean being perfect. It just means that when you have a choice between sticking to your principles and taking the easy way out – always do the former. A child raised in this type of environment will follow suit. You have laid a strong moral foundation that will be their support system for the rest of their lives. And the same is true of brands. A brand built on strong ethics, by a company that always tries to do the right thing, is exceptionally anti-fragile. When knocks happen – and cracks inevitably appear – an ethical brand will heal itself. An unethical brand that depends on smoke and mirrors will crumble.

Building an Emotional Bank Account

One of the best lessons I’ve ever learned in my life was the metaphor of the emotional bank account from Stephen Covey. My wife and I have tried to pass this along to our children. Essentially, you have to make emotional deposits with those close to you to build up a balance against which you can withdraw when you need to. If you raise kids that make frequent deposits, you know that their friends and family will be there for them when they need them. The degree of anti-fragility in your children is dependent on the strength of their support network. How loyal are their friends and family? Have they built this loyalty through regular deposits in the respective emotional bank accounts?

The same is true for anti-fragile brands. Brands that build loyalty in an authentic way can weather the inevitable storms that will come their way. This goes beyond the cost-of-switching rationale. Even brands that have you “locked in” today will inevitably lose that grip as technology removes marketplace friction and competition creeps ever closer. Emotional bank accounts are called that for a reason – this has to do with emotions, not rationality.

Accepting that Mistakes Happen

One of the hardest things about being a parent is giving your children room to make mistakes. But if you want to raise anti-fragile kids, you have to do this.

The same is true with brands. When things go wrong, we tend to want to exert control, to fix things. In doing so, we have to take control from someone else. In the case of parenting, you take control from your children, along with the opportunity for them to learn how to fix things themselves. In the case of branding, you try to take control from the market. But in the latter case, you can’t. You can respond, but you can’t control. It’s a bitter lesson to learn – but it’s a lesson best learned sooner rather than later.

Remember – You’re In This for the Long Run

Raising anti-fragile children means learning about proportionate responses when things go off the rails. The person your child is when they’re 15 is most likely not going to be the person they are when they’re 25. You’re not going to be the same person either. So while you have to be firm when they step out of line, you also have to take care not to destroy the long-term foundations of your relationship. Overreacting can cause lasting damage.

The same is true for brands. The market has a short memory. No matter how bad today may be, if you have an anti-fragile brand, the future will be better. Sometimes it’s just a matter of holding on and riding out the storm.

 

Fat Heads and Long Tails: Living in a Viral World

I, and the rest of the world, bought “Fire and Fury: Inside the Trump White House” last Friday. Forbes reports that in one weekend it climbed to the top of the Amazon bestseller list, and demand for the book is “unprecedented.”

We use that word a lot now. Our world seems to be a launching pad for “unprecedented” events. Nassim Nicholas Taleb’s black swans used to be the exception — that was the definition of  the term. Now they’re becoming the norm. You can’t walk down the street without accidentally kicking one.

Our world is a hyper-connected feedback loop that constantly engenders the “unprecedented”: storms, blockbusters, presidents. In this world, historical balance has disappeared and all bets are off.

One of the many things that has changed is the distribution pattern of culture. In 2006, Chris Anderson wrote the book “The Long Tail,” explaining how online merchandising, digital distribution and improved fulfillment logistics created an explosion of choices. Suddenly, the distribution curve of pretty much everything  — music, books, apps, video, varieties of cheese — grew longer and longer, creating Anderson’s “Long Tail.”

But let’s flip the curve and look at the other end. The curve has not just grown longer. The leading edge of it has also grown on the other axis. Heads are now fatter.
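To make the shape of that curve concrete, here’s a minimal sketch. It assumes a hypothetical Zipf-style popularity distribution (weight 1/k for the title at rank k) rather than any real sales data, but it shows how a handful of head titles can rival the entire tail:

```python
# Hypothetical illustration only: model title popularity as a
# Zipf-style curve, where the title at rank k gets weight 1/k.
N = 10_000                                   # catalog size
weights = [1 / k for k in range(1, N + 1)]
total = sum(weights)

head = sum(weights[:10]) / total             # the "Fat Head": top 10 titles
tail = sum(weights[1000:]) / total           # the "Long Tail": ranks 1,001+

print(f"Top 10 titles: {head:.1%} of all demand")
print(f"Ranks 1,001+:  {tail:.1%} of all demand")
```

On this toy curve, ten titles out of ten thousand capture roughly as much demand as the last nine thousand combined – a fat head sitting on a long tail.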

“Fire and Fury” has sold more copies in a shorter period of time than would have ever been possible at any other time in history. That’s partly because of the  same factors that created the Long Tail: digital fulfillment and more efficient distribution. But the biggest factor is that our culture is now a digitally connected echo chamber that creates the perfect conditions for virality. Feeding frenzies are now an essential element of our content marketing strategies.

If ever there was a book written to go viral, it’s “Fire and Fury.” Every page should have a share button. Not surprisingly, given its subject matter,  the book has all the subtlety and nuance of a brick to the head. This is a book built to be a blockbuster.

And that’s the thing about the new normal of virality: Blockbusters become the expectation out of the starting gate.

As I said last week, content producers have every intention of addicting their audience, shooting for binge consumption of each new offering. Wolff wrote this book  to be consumed in one sitting.

As futurist (or “futuristorian”) Brad Berens writes, the book is “fascinating in an I-can’t-look-away-at-the-17-car-pileup-with-lots-of-ambulances way.” But there’s usually a price to be paid for going down the sensational path. “Fire and Fury” has all the staying power of a “bag of Cheetos.” Again, Berens hits the nail on the head: “You can measure the relevance of Wolff’s book in half-lives, with each half-life being about a day.”

One of the uncanny things about Donald Trump is that he always out-sensationalizes any attempt to sensationalize him. He is the ultimate “viral” leader, intentionally — or not — the master of the “Fat Head.” Today that head is dedicated to Wolff’s book. Tomorrow, Trump will do something to knock it out of the spotlight.

Social media analytics developer Tom Maiaroto found the average sharing lifespan of viral content is about a day. So while the Fat Head may indeed be Fat, it’s also extremely short-lived. This means that, increasingly, content intended to go viral  — whether it be books, TV shows or movies — is intentionally developed to hit this short but critical window.
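A one-day half-life maps directly onto simple exponential decay. Here’s a quick sketch – the one-day figure is Berens’s rough estimate, not a measured constant:

```python
import math

HALF_LIFE_DAYS = 1.0  # Berens's estimate: relevance halves roughly daily

def relevance(days, initial=1.0):
    """Fraction of launch-day attention remaining after `days` days."""
    return initial * math.exp(-math.log(2) / HALF_LIFE_DAYS * days)

for d in range(4):
    print(f"day {d}: {relevance(d):.1%}")
```

Under that assumption, a blockbuster retains half its launch-day attention on day one and only an eighth by day three – which is why the release window matters so much.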

So what is the psychology behind virality? What buttons have to be pushed to start the viral cascade?

Wharton Marketing Professor Jonah Berger, who researched what makes things go viral, identified six principles: Social Currency, Memory Triggers, Emotion, Social Proof, Practical Value and Stories. “Fire and Fury” checks almost all these boxes, with the possible exception of practical value.

But it most strongly resonates with social currency, social proof and emotion. For everyone who thinks Trump is a disaster of unprecedented proportions, this book acts as kind of an ideological statement, a social positioner, an emotional rant and confirmation bias all rolled into one. It is a tribal badge in print form.

When we look at the diffusion of content through the market, technology has again acted as a polarizing factor. New releases are pushed toward the outlier extremes, either far down the Long Tail or squarely aimed at cashing in on the Fat Head. And if it’s the latter of these, then going viral becomes critical.

Expect more fire. Expect more fury.

Why Reality is in Deep Trouble

If 2017 was the year of Fake News, 2018 could well be the year of Fake Reality.

You Can’t Believe Your Eyes

I just saw Star Wars: The Last Jedi. When Carrie Fisher came on screen, I had to ask myself: Is this really her, or is that CGI? I couldn’t remember if she had the chance to do all her scenes before her tragic passing last year. When I had a chance to check, I found that it was actually her. But the very fact that I had to ask the question is telling. After all, Rogue One did resurrect Peter Cushing via CGI – and he passed away back in 1994.

CGI is not quite to the point where you can’t tell the difference between reality and computer generation, but it’s only a hair’s breadth away. It’s definitely to the point where you can no longer trust your eyes. And that has some interesting implications.

You Can Now Put Words in Anyone’s Mouth

The Rogue One visual effects head, John Knoll, had to fend off some pointed questions about the ethics of bringing a dead actor back to life. He defended the move by saying, “We didn’t do anything Peter Cushing would have objected to.” Whether you agree or not, the bigger question here is that they could have. They could have made the Cushing digital doppelganger do anything – and say anything – they wanted.

But It’s Not Just Hollywood That Can Warp Reality

If fake reality comes out of Hollywood, we are prepared to cut it some slack. There is a long and slippery ethical slope that defines the entertainment landscape. In Rogue One’s case, the issue wasn’t using CGI, or even using CGI to represent a human – that describes a huge slice of today’s entertainment. It was using CGI to resurrect a dead actor and literally put words in his mouth. That seemed to cross some ethical line in our perception of what’s real. But at the end of the day, this questionable warping of reality was still embedded in a fictional context.

But what if we could put words in the manufactured mouth of a sitting US president? That’s exactly what a team at the University of Washington did with Barack Obama. They used a neural network to essentially create a lip-sync video of Obama, with the computer manipulating images of his face to sync his lips to a sample of audio taken from another speech.

Being academics, they kept everything squeaky clean on the ethical front. All the words were Obama’s – it’s just that they were said at two different times. But those less scrupulous could easily synthesize Obama’s voice – or anyone’s – and sync it to video of them talking that would be indistinguishable from reality.

Why We Usually Believe Our Eyes

When it comes to a transmitted representation of reality, we accept video as the gold standard. Our brains believe what we see to be real. Of all our five senses, we trust sight the most to interpret what is real and what is fake. Photos used to be accepted as incontrovertible proof of reality, until Photoshop messed that up. Now, it’s video’s turn. Technology has handed us the tools that enable us to manufacture any reality we wish and distribute it in the form of video. And because it’s in that form, most everyone will believe it to be true.

Reality, Inc.

The concept of a universally understood and verifiable reality is important. It creates some type of provable common ground. We have always had our own ways of interpreting reality, but at the end of the day, there was typically someone – and some way – to empirically determine what was real, if we just bothered to look for it.

But we now run the risk of accepting manufactured reality as “good enough” for our purposes. In the past few years, we’ve discovered just how dangerous filtered reality can be. Whether we like it or not, Facebook, Google, YouTube and other mega-platforms are now responsible for how most of us interpret our world. These are for-profit organizations that really have no ethical obligation to attempt to provide a reasonable facsimile of reality. They have already outstripped the restraints of legislation and any type of ethical oversight. Now, these same platforms can be used to distribute media that are specifically designed to falsify reality. Of course, I should also mention that in return for access to all this, we give up a startling amount of information about ourselves. And that, according to UBC professor Taylor Owen, is deeply troubling:

“It means thinking very differently about the bargain that platforms are offering us. For a decade the deal has been that users get free services, and platforms get virtually unlimited collection of data about all aspects of our life and the ability to shape the information we consume. The answer isn’t to disengage, as these tools are embedded in our society, but instead to think critically about this bargain.

“For example, is it worth having Facebook on your mobile phone in exchange for the immense tracking data about your digital and offline behaviour? Or is the free children’s content available on YouTube worth the data profile that is being built about your toddler, the horrific content that gets algorithmically placed into your child’s feed, and the ways in which A.I. are creating content for them and shaping what they view? Is the Amazon smart speaker in your living room worth providing Amazon access to everything you say in your home? For me, the answer is a resounding ‘no’.”

2018 could be an interesting year…

Which Me am I — And On Which Network?

I got an email from Strava. If you’re not familiar with it, Strava is a social network for cyclists and runners. As the former, I joined Strava about two years ago.

Here is the email I received:

Your Friends Are on Strava

 Add friends to follow their adventures and get inspired by their workouts

 J. Doe, Somewhere, CA

 “Follow”

 (Note: the personal information has been changed because after preaching about privacy for the last two weeks, I do have to practice what I preach)

Here’s the thing: I’m not friends with Mr. Doe. I met him a few times on the speaking circuit when we crossed paths. To be brutally honest, J. Doe was a connection I thought would help me grow my business. He was a higher-profile speaker than I was. He’d written a book that sold way more copies than mine ever did. I was “friending up” in my networking.

The last time we met each other — several years ago now — I quickly extended a Facebook friends invite. At the time, I — and the rest of the world — was using Facebook as a catch-all bucket for all my social connections: friends, family and the people I was unabashedly stalking in order to make more money. And J. Doe accepted my invite. It gave my ego a nice little boost at the time.

So, according to Facebook, we’re friends. But we’re not — not really. And that became clear when I got the Strava invite. It would have been really weird if I connected with him on Strava, following his adventures and being inspired by his workouts. We just don’t have that type of relationship. There was no social basis for me to make that connection.

I have different social spheres in my life. I have the remnants of my past professional life as an online marketer. I have my passion as a cyclist. I have a new emerging sphere as a fledgling tourism operator. I have my family.

I could go on. I can think of only a handful of people who comfortably lie within two or more of my spheres.

But with social sign-ins (which I used for Strava) those spheres are suddenly mashed together. It’s becoming clear that socially, we are complex creatures with many, many sides.

Facebook would love nothing more than to be the sole supporting platform of our entire social grid. But that works at cross purposes with how humans socialize. It’s not a monolithic, one-size-fits-all thing, but a sprawling landscape cluttered with very distinctive nodes that are haphazardly linked together.

The only common denominator is ourselves, in the middle of that mess. And even we can have surprising variability. The me that loves cycling is a very different guy from the me that wanted to grow my business profile.

This modality is fueling an ever-expanding landscape of socially connected destinations.

Strava is a good example of this. Arguably, it provides a way to track my rides. But it also aspires to be the leading community of athletes. And that’s where it runs headlong into the problem of social modality.

Social sign-ins seem to be a win-win-win. For the user, it eases the headache of maintaining an ever-expanding list of user names and passwords. Sure, there’s that momentary lurch in the pit of our stomachs when we get that warning that we’re sharing our entire lives with the proprietors of the new site, but that goes away with just one little click.

For the website owner, every new social sign-in user comes complete with rich new data and access to all of their contacts. Finally, Facebook gets to sink its talons into us just a little deeper, gathering data from yet one more online outpost.

But like many things that seem beneficial, unintended consequences are part of the package. This is especially true when the third party I’m signing up for is creating its own community.

Is the “me” that wants to become part of this new community the “me” that Facebook thinks I am? Will things get weird when these two social spheres are mashed together?

Because Facebook assumes that I am always me and you are always you, whatever the context, some of us are forced to splinter our online social personas by maintaining multiple profiles. We may have a work profile and a social one.

The person Facebook thinks we are may be significantly different from the person LinkedIn thinks we are.  Keeping our social selves separate becomes a juggling act of ever-increasing proportions.

So why does Facebook want me to always be me?  It’s because of us — and by us, I mean marketers. We love the idea of markets that are universal and targeting that is omniscient. It just makes our lives so much easier. Our lives as marketers, I mean.

As people? Well, that’s another story — but right now, I’m a marketer.

See the problem?

Attention: Divided

I’d like you to give me your undivided attention. I’d like you to – but you can’t. First, I’m probably not interesting enough. Second, you no longer live in a world where that’s possible. And third, even if you could, I’m not sure I could handle it. I’m out of practice.

The fact is, our attention is almost never undivided anymore. Let’s take talking, for example. You know: old-fashioned, face-to-face, sharing-the-same-physical-space communication. It’s the one channel that most demands undivided attention. But when is the last time you had a conversation where you were giving it 100 percent of your attention? I actually had one this past week, and I have to tell you, it unnerved me. I was meeting with a museum curator and she immediately locked eyes on me and gave me the full breadth of her attention. I faltered. I couldn’t hold her gaze. As I talked, I scanned the room we were in. It’s probably been years since someone focused on me like that. And nary a smartphone was in sight.

If this is true when we’re physically present, imagine the challenge in other channels. Take television, for instance. We don’t watch TV like we used to. When I was growing up, I would be verging on catatonia as I watched the sparks fly between Batman and Catwoman (the Julie Newmar version – with all due respect to Eartha Kitt and Lee Meriwether). My dad used to call it the “idiot box.” At the time, I thought it was a comment on the quality of programming, but I now realize he was referring to my mental state. You could have dropped a live badger in my lap and not an eye would have been batted.

But that’s definitely not how we watch TV now. A recent study indicates that 177 million Americans have at least one other screen going – usually a smartphone – while they watch TV. According to Nielsen, there are only 120 million TV households. That means that 1.48 adults per household are definitely dividing their attention amongst at least two devices while watching Game of Thrones. My daughters and wife are squarely in that camp. Ironically, I now get frustrated because they don’t watch TV the same way I do – catatonically.

Now, I’m sure watching TV does not represent the pinnacle of focused mindfulness. But this could be a canary in a coal mine. We simply don’t allocate undivided attention to anything anymore. We think we’re multi-tasking, but that’s a myth. We don’t multi-task – we mentally fidget. We have the attention span of a gnat.

So, what is the price we’re paying for living in this attention-deficit world? Well, first, there’s a price to be paid when we do decide to communicate. I’ve already stated how unnerving it was for me when I did have someone’s laser-focused attention. But the opposite is also true. It’s tough to communicate with someone who is obviously paying little attention to you. Try presenting to a group that is more interested in chatting to each other. Research studies show that our ability to communicate effectively erodes quickly when we’re not getting feedback that the person or people we’re talking to are actually paying attention to us. Effective communication requires an adequate allocation of attention on both ends; otherwise, it spins into a downward spiral.

But it’s not just communication that suffers. It’s our ability to focus on anything. It’s just too damned tempting to pick up our smartphone and check it. We’re paying a price for our mythical multitasking – Boise State professor Nancy Napier suggests a simple test to prove this. Draw two lines on a piece of paper. While having someone time you, write “I am a great multi-tasker” on one, then write down the numbers from 1 to 20 on the other. Next, repeat this same exercise, but this time, alternate between the two: write “I” on the first line, then “1” on the second, then go back and write “a” on the first, “2” on the second and so on. What’s your time? It will probably be double what it was the first time.

Every time we try to mentally juggle, we’re more likely to drop a ball. Attention is important. But we keep allocating thinner and thinner slices of it. And a big part of the reason is the smartphone that is probably within arm’s reach of you right now. Why? Because of something called intermittent variable rewards. Slot machines use it. And that’s probably why slot machines make more money in the US than baseball, movies and theme parks combined. Tristan Harris, who is taking technology to task for hijacking our brains, explains the concept: “If you want to maximize addictiveness, all tech designers need to do is link a user’s action (like pulling a lever) with a variable reward. You pull a lever and immediately receive either an enticing reward (a match, a prize!) or nothing. Addictiveness is maximized when the rate of reward is most variable.”

Your smartphone is no different. In this case, the reward is a new email, Facebook post, Instagram photo or Tinder match. Intermittent variable rewards – together with the fear of missing out – make your smartphone as addictive as a slot machine.
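The lever-pull loop Harris describes is easy to sketch. This toy version (the 30% payoff rate and the function names are my own arbitrary assumptions) contrasts a predictable fixed-ratio schedule with the variable-ratio one that slot machines – and notifications – use:

```python
import random

rng = random.Random(1)

def fixed_reward(pull_number, every=3):
    """Fixed-ratio schedule: a predictable payoff on every Nth pull."""
    return pull_number % every == 0

def variable_reward(p=0.3):
    """Variable-ratio schedule: each pull MIGHT pay off --
    the unpredictable pattern that maximizes addictiveness."""
    return rng.random() < p

fixed = [fixed_reward(n) for n in range(1, 11)]
variable = [variable_reward() for _ in range(10)]
print("fixed:   ", fixed)     # a regular rhythm -- easy to ignore
print("variable:", variable)  # no pattern -- keeps you pulling
```

The fixed schedule quickly becomes boring because you know when the next reward lands; the variable one never tells you, so every pull (or phone check) carries the chance of a payoff.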

I’m sorry, but I’m no match for all of that.

When Technology Makes Us Better…

I’m always quick to point out the darker sides of technology. So, to be fair, I should also give credit where credit is due. That’s what today’s column is about. Technology, we collectively owe you one. Why? Because without you, we wouldn’t be slowly chipping away at the massive issue of sexual predation. #Metoo couldn’t have happened without you.

I’ve talked before of Mark Granovetter’s threshold model of crowd behavior. In the past, I’ve used it to explain how it can tip collective behavior toward the negative, turning crowds into mobs. But it can also work the other way, turning crowds into movements. Either way, the threshold model depends on connection – and technology makes that connection possible. What’s more, it makes it possible in a very specific way that is important to understand.

Technological connection is often ideological connection. We connect in ad hoc social networks that center around an idea. We find common ground that is not physical but conceptual. In the process, we forge new social connections that are freed from the typical constraints that introduce friction in the growth of social networks. We create links that are unrestricted by how people look, where they live, how much they earn or what church they worship at. All we need is to find resonance within ideas and we can quickly create a viral wave. The cost of connection is reduced.

This in no way diminishes the courage required to post the #metoo hashtag. I have been in the digital world for almost three decades now, and in that time I have met many, many remarkable women. I hope I have judged them as fellow human beings and have treated them as equals. It has profoundly saddened me to see most of them join the #metoo movement in the past few weeks. It has been painful to learn just how pervasive the problem is and to see this light creep into a behavioral basement of which we are becoming more aware. But it is oh-so-necessary. And I must believe that technology – and the comfort it affords by letting you know you’re not alone – has made it just a little bit easier to type those six characters.

As I have always said, technology erases friction. It breaks down the sticking points that used to allow powerful individuals to exert control. Control is needed to maintain the circles of complicity that allow the Harvey Weinsteins of the world to prey on others. But with technology, all we need is one little crack in that circle to set in motion a chain reaction that blasts it apart.

I believe that the Weinstein example will represent a sea-change moment in how our society views sexual predation. These behaviors are always part of a power game. For it to continue to exist, the perpetrator must believe in their own power and their ability to maintain it. Once the power goes, so does the predation. #Metoo has shown that your power can disappear immediately and permanently if you get publicly tagged. “If it happened to Harvey, it could happen to me” may become the new cautionary tale.

But I hope it’s not just the fear of being caught that pushes us to be better. I also hope that we have learned that it’s not okay to tolerate this. In the incredibly raw and honest post of screenwriter Scott Rosenberg, we had our worst fears confirmed: “Everybody f—ing knew!” And everybody who knew is being sucked into the whirlpool of Harvey’s quickly sinking bulk. I have to believe this is tipping the balance in the right direction. We good men (and women) might be less likely to do nothing next time.

Finally, technology has made us better, whether we believe it or not. In 1961, when I was born, Weinstein’s behavior would have been accepted as normal. It would have even been considered laudable in some circles (predominantly male circles, granted). As a father of two daughters, I am grateful that that’s not the world we live in today. The locker-room mentality that allows the Harvey Weinsteins, Robert Scobles, and Donald Trumps of the world to flourish is being chipped away – #metoo post by #metoo post.

And we have technology to thank for that.

Together We Lie

Humans are social animals. We’ve become this way because – evolutionarily speaking – we do better as a group than individually. But there’s a caveat here. If you get a group of usually honest people together, they’re more likely to lie. Why is this?

Martin Kocher and his colleagues from LMU Munich set up a study in which participants watched a video of a single roll of a die and then reported the number that came up. Depending on what they reported, there was a payoff. The researchers tested both individuals and small groups who had the opportunity to chat anonymously with each other before reporting. The result:

“Our findings are unequivocal: People are less likely to lie if they decide on their own.”

Even individuals who answered honestly on their own started lying when they got into a group.

The researchers called this a “dishonesty shift.” They blame it on a shifting weight placed on the norm of honesty. Norms are those patterns we have that guide us in our behaviors and beliefs. But those norms may be different individually than they are when we’re part of a group.

“Feedback is the decisive factor. Group-based decision-making involves an exchange of views that may alter the relative weight assigned to the relevant norm.”

Let’s look at how this may play out. Individually, we may default to honesty. We do so because we’re unsure of the consequences of not being honest. But when we get in a group, we start talking to others and it’s easier to rationalize not being honest – “Well, if everyone’s going to lie, I might as well too.”

Why is this important? Because marketing is done in groups, by groups, to groups. The dynamics of group-based ethics are important for us to understand. It could help to explain the most egregious breaches of ethics we see becoming more and more commonplace, either in corporations or in governments.

Four of the seminal studies in psychology and sociology shed further light on why groups tend to shift towards dishonesty. Let’s look at them individually.

In 1955, Solomon Asch showed that even if individually we believe something to be incorrect, if enough people around us have a different opinion, we’ll go with the group consensus rather than risk being the odd person out. In his famous study, he would surround a subject with “plants” who, when shown cards with black lines of obviously differing lengths on them, would insist that the lines were equal. The subjects were then asked their opinion. Three quarters of them went along with the group at least once rather than risk disagreement. As Asch said in his paper – quoting sociologist Gabriel Tarde – “Social man is a somnambulist.” We have about as much independent will as your average sleepwalker.

Now, let’s continue with Stanley Milgram’s Obedience to Authority study, perhaps the most controversial and frightening of the group. When confronted with an authoritative demeanor, a white coat and a clipboard, 65% of the subjects meekly followed directions and delivered what they believed were dangerous – even lethal – levels of electrical shock to a hapless individual. The results were so disheartening that we’ve been trying to debunk them ever since. But a follow-up study by Stanford psychology professor Philip Zimbardo – where subjects were arbitrarily assigned roles as guards and inmates in a mock prison scenario – was even more shocking. We’re more likely to become monsters and abandon our personal ethics when we’re in a group than when we act alone. Whether it’s obedience to authority – as Milgram was trying to prove – or social conformity taken to the extreme, we tend to do very bad things when we’re in bad company.

But how do we slip so far so quickly from our own personal ethical baseline? Here’s where the last study I’ll cite can shed a little light. Sociologist Mark Granovetter – famous for his Strength of Weak Ties study – also looked at the viral spreading of behaviors in groups. I’ve talked about this in a previous column, but here’s the short version: if we have a choice between two options with accompanying social consequences, which one we pick may be driven by social conformity. If we see enough other people around us choosing the more disruptive option (i.e., starting a riot), we may follow suit. Even though we all have different thresholds – which we do – the nature of a crowd is such that those with the lowest thresholds act first, setting off a bandwagon effect that can eventually tip the entire group into disruption.
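Granovetter’s threshold model is simple enough to sketch in a few lines of code. This is a minimal illustration, not his original formulation: each person’s threshold is the number of participants they need to see before joining in, and the function name and example crowds are invented for the sketch.

```python
def riot_cascade(thresholds):
    """Granovetter-style threshold cascade (illustrative sketch).

    Each person joins the disruptive option once the number of people
    already participating meets or exceeds their personal threshold.
    Returns the final number of participants.
    """
    joined = 0
    changed = True
    while changed:
        changed = False
        # Count everyone whose threshold is satisfied by the current crowd.
        new_joined = sum(1 for t in thresholds if t <= joined)
        if new_joined != joined:
            joined = new_joined
            changed = True
    return joined

# A crowd whose thresholds form an unbroken chain: one instigator
# (threshold 0) tips the next person, and so on -- everyone riots.
print(riot_cascade(list(range(100))))  # 100

# Remove just the person with threshold 1 and the chain breaks:
# only the instigator acts, and the riot never starts.
print(riot_cascade([0] + list(range(2, 100))))  # 1
```

The second example is the striking part of the model: two crowds that are nearly identical in their distribution of thresholds can produce wildly different outcomes, which is exactly why group behavior is so hard to predict.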

These were all studied in isolation, because that’s how science works: we study variables in isolation. But it’s when factors combine that we get the complexity that typifies the real world – and the real marketplace. And that’s where predictability goes out the window. The group dynamics in play can create behavioral patterns that make no sense to the average person with an average degree of morality. But it’s happened before, it’s happening now, and it’s sure to happen again.

Addicted to Tech

A few columns ago, I mentioned one of the aspects that is troubling me about technology – the shallowness of social media. I had mentioned at the time that there were other aspects that were equally troubling. Here’s one:

Technology is addictive – and it’s addictive by design.

Let’s begin by looking at the definition of addiction:

Persistent compulsive use of a substance known by the user to be harmful

So, let’s break it down. I don’t think you can quibble with the persistent, compulsive use part. When was the last time you had your iPhone in your hand? We can simply swap out “substance” for “device” or “technology.” That leaves us with the last qualifier – “known by the user to be harmful” – and there are two parts to this: is it harmful, and does the user know it’s harmful?

First, let’s look at the neurobiology of addiction. What causes us to use something persistently and compulsively? Here, dopamine is the culprit. Our reward center uses dopamine and the pleasurable sensation it produces as a positive reinforcement to cause us to pursue activities which over many hundreds of generations have proven to be evolutionarily advantageous. But Dr. Gary Small, from the UCLA Brain Research Institute, warns us that this time could be different:

“The same neural pathways in the brain that reinforce dependence on substances can reinforce compulsive technology behaviors that are just as addictive and potentially destructive.”

We like to think of big tobacco as the most evil of evil empires – guilty of promoting addiction to a harmful substance – but is there much separating it from the purveyors of tech, such as Facebook or Google? According to Tristan Harris, there may be a very slippery slope between the two. I’ve written about Harris before. He’s the former Google product manager who launched the Time Well Spent non-profit, devoted to stopping “tech companies from hijacking our minds.” Harris points the finger squarely at the big Internet companies for building platforms intentionally designed to consume as much of our time as possible. There’s empirical evidence to back up his accusations. Researchers at Michigan State University and at two universities in the Netherlands found that merely seeing the Facebook logo can trigger a conditioned response in a social media user that starts the dopamine cycle spinning. We start jonesing for a social media fix.

So, what if our smartphones and social media platforms seduce us into using them compulsively? What’s the harm? That’s the second part of the addiction equation – is whatever we’re using actually harmful? After all, it’s not like tobacco, which was proven to cause lung cancer.

Ah, but that’s the thing, isn’t it? We smoked cigarettes for almost a hundred years before we finally found out they were bad for us. Sometimes it takes a while for the harmful effects of an addiction to appear. The same could be true of our tech habit.

Tech addiction plays out at many different levels of cognition, and it could be much more sinister than the simple waste of time that Tristan Harris is worried about. There’s mounting evidence that overuse of tech could dramatically alter our ability to socialize effectively with other humans. The debate, which I’ve talked about before, arises when we substitute screen-to-screen interaction for face-to-face. The supporters say this is simply another type of social bonding – one that comes with additional benefits. The naysayers worry that we’re just not built to communicate through screens and that – sooner or later – there will be a price to be paid for our obsessive use of digital platforms.

Dr. Jean Twenge, professor of psychology at San Diego State University, researches generational differences in behavior. It’s here where the full impact of the introduction of a disruptive environmental factor can be found. She found a seismic shift in behaviors between Millennials and the generation that followed them. It was a profound difference in how these generations viewed the world and where they spent their time. And it started in 2012 – the year when the proportion of Americans who owned a smartphone surpassed 50 percent. She sums up her concern in unequivocal terms:

“The twin rise of the smartphone and social media has caused an earthquake of a magnitude we’ve not seen in a very long time, if ever. There is compelling evidence that the devices we’ve placed in young people’s hands are having profound effects on their lives—and making them seriously unhappy.”

Not only are we less happy, we may be becoming less smart. As we become more reliant on technology, we do something called cognitive off-loading. We rely on Google rather than our memories to retrieve facts. We trust our GPS more than our own wayfinding strategies to get us home. Cognitive off-loading is a way to move beyond the limits of our own minds, but there may be an unacceptable trade-off here. Brains are like muscles – if we stop using them, they begin to atrophy.

Let’s go back to that original definition and the three qualifying criteria:

  • Persistent, compulsive use
  • Harmful
  • We know it’s harmful

In the case of tech, let’s not wait a hundred years to put check marks after all of these.

Is Busy the New Alpha?

Imagine you’ve just been introduced into a new social situation. Your brain immediately starts creating a social hierarchy. That’s what we do. We try to identify the power players. The process by which we do this is interesting. The first thing we do is look for obvious cues. In a new job, that would be titles and positions. Then the process becomes very Bayesian – we form a base understanding of the hierarchy almost immediately and then constantly update it as we gain more knowledge. We watch power struggles and update our hierarchy based on the winners and losers. We start assigning values to the people in this particular social network and, more importantly, start assessing our place in the network and our odds of ascending in the hierarchy.
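That “very Bayesian” updating can be sketched in a few lines of code. Everything here – the names, the flat prior, and the win probabilities – is invented purely for illustration: we start with no idea who the alpha is, then revise our beliefs after each power struggle we observe.

```python
def update_alpha_beliefs(prior, winner, p_win_if_alpha=0.8, p_win_otherwise=0.4):
    """One Bayesian update of 'who is the alpha' after a power struggle.

    prior: dict mapping person -> P(that person is the alpha)
    winner: the person who won this struggle
    The two likelihoods are illustrative assumptions, not measured values:
    an alpha is assumed to win 80% of struggles, anyone else only 40%.
    """
    posterior = {}
    for person, p in prior.items():
        likelihood = p_win_if_alpha if person == winner else p_win_otherwise
        posterior[person] = p * likelihood  # Bayes: prior times likelihood
    total = sum(posterior.values())
    return {person: p / total for person, p in posterior.items()}  # normalize

# Flat prior: a newcomer with no cues treats everyone as equally likely alphas.
beliefs = {"Ana": 1 / 3, "Ben": 1 / 3, "Cal": 1 / 3}

# Watch four power struggles and update after each one.
for w in ["Ana", "Ana", "Cal", "Ana"]:
    beliefs = update_alpha_beliefs(beliefs, w)

print(max(beliefs, key=beliefs.get))  # Ana
```

After a handful of observations the belief mass concentrates on the person who keeps winning – which is roughly what our brains seem to do from the first minutes in a new room.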

All of that probably makes sense to you as you read it. There’s nothing really earth-shaking or counterintuitive here. What is interesting is that the cues we use to assign standing are context-dependent. They can also change over time. What’s more, they can vary from person to person and generation to generation.

In other words, like most things, our understanding of social hierarchy is in the midst of disruption.

An understanding of hierarchy appears to be hardwired into us. A recent study found that humans can determine social standing and the accumulation of power pretty much as soon as they can walk. Toddlers as young as 17 months could identify the alphas in a group. One of the authors of the study, University of Washington psychology professor Jessica Sommerville, said that even the very young can “see that someone who is more dominant gets more stuff.” That certainly squares with our understanding of how the world works. “More stuff” has been how we’ve determined social status for hundreds of years. In sociology, it’s called conspicuous consumption, a term coined by sociologist Thorstein Veblen. And it’s a signaling strategy that evolved in humans over our recorded history. The more stuff we had, and the less we had to do to get that stuff, the more status we had. Just over a hundred years ago, Veblen called this the Leisure Class.

But today that appears to be changing. A recent study seems to indicate that we now associate busyness with status. Here, it’s time – not stuff – that is the scarce commodity. Social status signaling is more apt to involve complaining about how we never go on a vacation than about our “summer on the continent”.

At least, this seems to be true in the U.S. The researchers also ran their study in Italy and there the situation was reversed. Italians still love their lives of leisure. The U.S. is the only developed country in the world without a single legally required paid vacation day or holiday. In Italy, every employee is entitled to at least 32 paid days off per year.

In our world of marketing – which is acutely aware of social signaling – this could create some interesting shifts in messaging. I think we’re already seeing this: campaigns aimed at busy people seem to equate scarcity of time with success. The one thing missing in all this social scrambling – whether it takes the form of conspicuous consumption or working yourself to death – might be happiness. Last year, a study out of the University of British Columbia found a strong link between happiness and valuing one’s time more than money.

Maybe those Italians are on to something.

Curmudgeon, Chicken Little or Cognoscenti?

Apparently I’m old and out of step. Curmudgeonly, even. And this is from people of my own generation. My previous column about the potential shallowness encouraged by social media drew a few comments that indicated I was just being a grumpy old man. One was from an old industry friend – Brett Tabke:

“The rest of the article is like out of the 70’s in that it is devoid of the reality that is the uber-me generation. The selfie is only a reflection of their inward focus.”

The other was from Monica Emrich, whom I’ve never had the pleasure of meeting:

“‘Social Media Is Barely Skin-Deep.’ ho hum. History shows: when new medium hits, civilization as we know it is over.”

These comments seem to be telling me: “Relax. You just don’t understand because you’re too old. Everything will be great.” And if that’s true, I’d be okay with it. I’m more than willing to be proven a doddering old fool if it means technology is ushering us into a new era of human greatness.

But what if this time is different? What if Monica’s facetious comment actually nailed it? Maybe civilization as we know it will be over. The important part of this is “as we know it.” Every technological disruption unleashes a wave of creative destruction that pushes civilization in a new direction. We seem to blindly assume it will always go in the right direction. And it is true that technology has generally elevated the human race. But not uniformly – and not consistently. What if this shift is different? What if we become less than what we were? It can happen. Brexit – Xenophobia – Trump – Populism, all these things are surfing on the tides of new technology.

Here’s the problem. There are some aspects of technology that we’ve never had to deal with before – at least, not at this scale. One of these aspects (others will no doubt be the topic of a future Media Insider) is that technology is now immersive and ubiquitous. It creates an alternate reality for us, and it has done it in a few short decades. Why is this dangerous? Because evolution has not equipped us to deal with this new reality. In the past, when there has been a shift in our physical reality, it has taken place over many generations. Natural selection had time to reshape the human genome to survive and eventually thrive in the new reality. Along the way, we acquired checks and balances that allowed us to deal with the potentially negative impacts of our environment.

But our new reality is different. It has emerged in the space of a single generation. There is no way we could have acquired natural defenses against it. We are operating in an environment we have never been tested for. The consequences are yet to be discovered.

Now, your response might be to say, “Yes, evolution doesn’t move this quickly, but our brains can. They are elastic and malleable.” This is true, but there’s a big “but” hidden in this argument. Our brains rewire themselves to better match their environment. This is one of the things humans excel at. But this rewiring happens on top of a primitive platform with some built-in limitations. The assumption is that a better match with our environment provides a better chance of survival for the species.

But what if technology is throwing us a curveball in this case? No matter what environment we have adapted to, there has been one constant: the history of humans depends on our success in living together. We have evolved to be social animals, but that evolution is predicated on the assumption that our socializing would take place face-to-face. Technology is artificially decoupling our social interactions from the very definition of society we have evolved to handle. A recent Wharton interview with Eden Collinsworth sounds the same alarm bells.

“The frontal lobes, which are the part of the brain that puts things in perspective and allows you to be empathetic, are constantly evolving. But it is less likely to evolve and develop those skills if you are in front of a screen. In other words, those skills come into play when you have a face-to-face interaction with someone. You can observe facial gestures. You can hear the intonation of a voice. You’re more likely to behave moderately in that exchange, unless it’s a just a knock-down, drag-out fight.”

Collinsworth’s premise – which is covered in her new book, Behaving Badly – is that this artificial reality is changing our concepts of morality and ethics. She reminds us the two are interlinked, but they are not the same thing. Morality is our own personal code of conduct. Ethics are a shared code that society depends on to instill a general sense of fairness. Collinsworth believes both are largely learned from the context of our culture. And she worries that a culture that is decoupled from the physical reality we have evolved to operate in may have dire consequences.

The fact is that if our morality and ethics are intended to keep us socially more cohesive, this works best in a face-to-face context. In an extreme example of this, Lt. Col. Dave Grossman, a former paratrooper and professor of psychology at West Point, showed how our resistance to killing another human in combat is inversely related to our physical distance from them. The closer we are to them, the more resistant we are to the idea of killing them. This makes sense in an evolutionary environment where all combat was hand-to-hand. But today, the killer could be in a drone flight control center thousands of miles from his or her intended target.

This evolved constraint on unethical behavior – the social check and balance of being physically close to the people we’re engaging with – is important. And while the two examples I’ve cited – the self-absorbed behavior on social networks and the moral landscape of a drone-strike operator – may seem magnitudes apart in terms of culpability, the underlying neural machinery is related. What we believe is right and wrong is determined by a moral compass set to the bearings of our environment. The fundamental workings of that compass assumed we would be face-to-face with the people we have to deal with. Thanks to technology, that’s no longer the case.

Maybe Brett and Monica are right. Maybe I’m just being alarmist. But if not, we’d better start paying more attention. Because civilization “as we know it” may be ending.