The Strange Polarity of Facebook’s Moral Compass

For Facebook, 2018 came in like a lion and went out like a really pissed-off Godzilla with a savagely bad hangover after the Mother of all New Year’s Eve parties. In other words, it was not a good year.

As Zuckerberg’s 2018 shuddered to its close, it was disclosed that Facebook and Friends had opened our personal data kimonos for any of their “premier” partners. This was in direct violation of their own data privacy policy, which makes it even more reprehensible than usual. This wasn’t a bone-headed fumbling of our personal information. This was a fully intentional plan to financially benefit from that data in a way we didn’t agree to, hide that fact from us and then deliberately lie about it on more than one occasion.

I was listening to a radio interview about this latest revelation, and one of the analysts – social media expert and author Alexandra Samuel – mused about when it was that Facebook lost its moral compass. She has been familiar with the company since its earliest days, having had the opportunity to talk to Mark Zuckerberg personally. In her telling, Zuckerberg is an evangelist who lost his way, drawn to the dark side by the corporate curse of profit and greed.

But Siva Vaidhyanathan – the Robertson Professor of Modern Media Studies at the University of Virginia – tells a different story. And it’s one that seems much more plausible to me. Zuckerberg may indeed be an evangelist, although I suspect he’s more of a megalomaniac. Either way, he does have a mission. And that mission is not opposed to corporate skullduggery. It fully embraces it. Zuckerberg believes he’s out to change the world, while making a shitload of money along the way. And he’s fine with that.

That came as a revelation to me. I spent a good part of 2018 wondering how Facebook could have been so horrendously cavalier with our personal data. I put it down to corporate malfeasance. Public companies are not usually paragons of ethical efficacy. This is especially true when ethics and profitability are diametrically opposed to each other. This is the case with Facebook. In order for Facebook to maintain profitability with its current revenue model, it has to do things with our private data we’d rather not know about.

But even given the moral vacuum that can be found in most corporate boardrooms, Facebook’s brand of hubris in the face of increasingly disturbing revelations seems off-note – out of kilter with the normal damage control playbook. Vaidhyanathan’s analysis brings that cognitive dissonance into focus. And it’s a picture that is disturbing on many levels.


Siva Vaidhyanathan

According to Vaidhyanathan, “Zuckerberg has two core principles from which he has never wavered. They are the founding tenets of Facebook. First, the more people use Facebook for more reasons for more time of the day the better those people will be. …  Zuckerberg truly believes that Facebook benefits humanity and we should use it more, not less. What’s good for Facebook is good for the world and vice-versa.

Second, Zuckerberg deeply believes that the records of our interests, opinions, desires, and interactions with others should be shared as widely as possible so that companies like Facebook can make our lives better for us – even without our knowledge or permission.”

Mark Zuckerberg is not the first tech company founder to have a seemingly ruthless god complex and a “bigger than any one of us” mission. Steve Jobs, Bill Gates, Larry Page, Larry Ellison; I could go on. What is different this time is that Zuckerberg’s chosen revenue model runs completely counter to the idea of personal privacy. Yes, Google makes money from advertising, but the vast majority of that is delivered in response to a very intentional and conscious request on the part of the user. Facebook’s gaping vulnerability is that it can only be profitable by doing things of which we’re unaware. As Vaidhyanathan says, “violating our privacy is in Facebook’s DNA.”

Which all leads to the question, “Are we okay with that?” I’ve been thinking about that myself. Obviously, I’m not okay with it. I just spent 720 words telling you so. But will I strip my profile from the platform?

I’m not sure. Give me a week to think about it.

Is Google Politically Biased?

As a company, the answer is almost assuredly yes.

But are the search results biased? That’s a much more nuanced question.

Sundar Pichai testifying before Congress

In trying to answer that question last week, Google CEO Sundar Pichai tried to explain how Google’s algorithm works to Congress’s House Judiciary Committee (which is kind of like God explaining how the universe works to my sock, but I digress). One of the catalysts for this latest appearance of a tech executive was another of President Trump’s ranting tweets, which intimated something was rotten in the Valley of the Silicon:

“Google search results for ‘Trump News’ shows only the viewing/reporting of Fake New Media. In other words, they have it RIGGED, for me & others, so that almost all stories & news is BAD. Fake CNN is prominent. Republican/Conservative & Fair Media is shut out. Illegal? 96% of … results on ‘Trump News’ are from National Left-Wing Media, very dangerous. Google & others are suppressing voices of Conservatives and hiding information and news that is good. They are controlling what we can & cannot see. This is a very serious situation-will be addressed!”

Granted, this tweet is non-factual, devoid of any type of evidence and verging on frothing at the mouth. As just one example, let’s take the 96% number that Trump quotes in the above tweet. That came from a very unscientific straw poll done by one reporter on a far right-leaning site called PJ Media. In effect, Trump did exactly what he accuses Google of doing – he cherry-picked his source and called it a fact.

But what Trump has inadvertently put his finger on is the uneasy balance that Google tries to maintain as both a search engine and a publisher. And that’s where the question becomes cloudy. It’s a moral precipice that may be clear in the minds of Google engineers and executives, but it’s far from that in ours.

Google has gone on record insisting its algorithm is apolitical. But based on a recent interview with Google News head Richard Gingras, there is some wiggle room in that assertion. Gingras stated,

“With Google Search, Google News, our platform is the open web itself. We’re not arbiters of truth. We’re not trying to determine what’s good information and what’s not. When I look at Google Search, for instance, our objective – people come to us for answers, and we’re very good at giving them answers. But with many questions, particularly in the area of news and public policy, there is not one single answer. So we see our role as [to] give our users, citizens, the tools and information they need – in an assiduously apolitical fashion – to develop their own critical thinking and hopefully form a more informed opinion.”

But – in the same interview – he says,

“What we will always do is bias the efforts as best we can toward authoritative content – particularly in the context of breaking news events, because major crises do tend to attract the bad actors.”

So Google does boost news sites that it feels are reputable, and it’s these sites – like CNN – that typically dominate in the results. Do reputable news sources tend to lean left? Probably. But that isn’t Google’s fault. That’s the nature of the open web. If you use that as your platform, you build in its inherent biases. And the minute you further filter on top of that platform, you leave yourself open to accusations of editorializing.

There is another piece to this puzzle. The fact is that searches on Google are biased, but that bias is entirely intentional. The bias in this case is yours. Search results have been personalized so that they’re more relevant to you. Things like your location, your past search history, the way you structure your query and a number of other signals will be used by Google to filter the results you’re shown. There is no liberal conspiracy. It’s just the way that the search algorithm works. In this way, Google is prone to the same type of filter-bubble problem that Facebook has. In another interview, Tim Hwang, director of the Harvard-MIT Ethics and Governance of AI Initiative, touched on this:

“I was struck by the idea that whereas those arguments seem to work as late as only just a few years ago, they’re increasingly ringing hollow, not just on the side of the conservatives, but also on the liberal side of things as well. And so what I think we’re seeing here is really this view becoming mainstream that these platforms are in fact not neutral, and that they are not providing some objective truth.”
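To make the mechanics concrete: the personalization described above amounts to re-ranking a common set of results with user-specific signals. Here’s a deliberately toy sketch in Python – the signal names and weights are invented for illustration, and nothing here reflects Google’s actual algorithm:

```python
# Toy sketch of personalized re-ranking. The signals and weights
# below are hypothetical, not anything Google has disclosed.

def personalize(results, user):
    """Re-rank a shared list of base results using user signals."""
    def score(result):
        s = result["base_relevance"]
        if result["region"] == user["location"]:
            s += 0.2  # results local to the user get a nudge
        if result["site"] in user["history"]:
            s += 0.3  # sites the user already visits rank higher
        return s
    return sorted(results, key=score, reverse=True)

results = [
    {"site": "siteA.com", "base_relevance": 0.9, "region": "US"},
    {"site": "siteB.com", "base_relevance": 0.8, "region": "CA"},
]
user = {"location": "CA", "history": {"siteB.com"}}

# For this user, siteB.com (0.8 + 0.2 + 0.3) now outranks siteA.com (0.9),
# even though siteA.com had the higher base relevance.
print([r["site"] for r in personalize(results, user)])
```

Two users running the identical query get different rankings – not because anyone editorialized, but because the re-ranking signals differ. That’s the filter bubble in miniature.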

The biggest challenge here lies not in the reality of what Google is or how it works, but in what our perception of Google is. We will never know the inner workings of the Google algorithm, but we do trust in what Google shows us. A lot. In our own research some years ago, we saw a significant lift in consumer trust when brands showed up on top of search results. And this effect was replicated in a recent study that looked at Google’s impact on political beliefs. This study found that voter preferences can shift by as much as 20% due to biased search rankings – and that effect can be even higher in some demographic groups.

If you are the number one channel for information, if you manipulate the ranking of the information in any way and if you wield the power to change a significant percentage of minds based on that ranking – guess what? You are the arbiter of truth. Like it or not.

Why Marketing is Increasingly Polarizing Everything


Trump. Kanye. Kaepernick. Miracle Whip.

What do these things all have in common? They’re polarizing. Just the mention of them probably stirs up strong feelings in you, one way or the other.

Wait. Miracle Whip?

Yep. Whether you love or hate Miracle Whip is perhaps the defining debate of our decade.

Okay, maybe not. But it turns out that Miracle Whip – which I always thought of as the condiment counterpart to vanilla – is a polarized brand, according to an article in the Harvard Business Review.  And far from being aghast at the thought, Kraft Foods, the maker of Miracle Whip, embraced the polarization with gusto. They embedded it in their marketing.

I have to ask – when did it become a bad thing to be vanilla? I happen to like vanilla. But I always order something else. And there’s the rub. Vanilla is almost never our first choice, because we don’t like to be perceived as boring.

Boring is the kiss of death for marketing. So even Miracle Whip, which is literally “boring” in a jar, is trying to “whip” up some controversy. Our country is being split down the middle and driven to either side – shoved to the margins of outlier territory. Outrageous is not only acceptable. It’s become desirable. And marketing is partly to blame.

We marketers are enamored with this idea of “viralness.” We want advertising to be amplified through our target customer’s social networks. Boring never gets retweeted or shared. To get noticed, advertising has to jolt us out of the information filters we have set on high alert. That’s why polarization works. By moving to extremes, brands catch our attention. And as they move to extremes, they drag us along with them. Increasingly, the brands we choose as our own identifying badges are moving away from any type of common middle ground. Advertising is creating a nation of ideological tribes with an ever-increasing divide separating them.

The problem is that polarization works. Look at Nike. As Sarah Mahoney recently documented in a Mediapost article, the Colin Kaepernick campaign turned some impressive numbers for Nike. Research from Kantar Millward Brown found these ads were particularly effective in piercing our ennui. The surprising part is that they did it on both sides of the divide. Based on Kantar’s Link evaluation, the ad scored in the top 15% of ads on something called “Power Contribution.” According to Kantar, that’s the ad’s “potential to impact long-term equity.” If we strip away the “market-speak,” that basically means the Kaepernick ads make Nike an excellent tribal badge to rally around.

If you’re a marketer, it’s hard to argue with those numbers. And is it really important if half the world loves a brand, and the other half hates it? I suspect it is. The problem comes when we look at exactly the thing Kantar’s Link evaluation measures – the intensity of feeling you have towards a brand. The more intense the feeling, the less rational you are. And if the object of your affection lies in outlier territory, those emotions can become highly confrontational towards those on the other side of the divide. Suddenly, opinions become morals, and there is no faster path to hate than embracing a polarized perspective on morality. The more that emotionally charged marketing pushes us towards the edges, the harder it is to respect opinions that are opposed to our own. This embracing of polarization in unimportant areas – like which running shoes you choose to wear – increases polarization in other areas where it’s much more dangerous. Like politics.

As if we haven’t seen enough evidence of this lately, polarized politics can cripple a country. In a recent interview on NPR, Georgia State political science professor Jennifer McCoy listed three possible outcomes from polarization. First, the country can enter polarization gridlock, where nothing can get done because there is a complete lack of trust between opposing parties. Second, a polarization pendulum can occur, where power swings back and forth between the two sides and most of the political energy is expended undoing the initiatives of the previous government. Often there is little logic to this, other than the fact that the initiatives were started by “them” and not “us.” Finally, one side can find a way to stay in power and then actively work to diminish and vanquish the other side by dismantling democratic platforms.

Today, as you vote, you’ll see ample evidence of the polarization of America. You’ll also see that at least two of the three outcomes of polarization are already playing out. We marketers just have to remember that while we love it when a polarized brand goes viral, there may be another one of those unintended consequences lurking in the background.


Our Trust Issues with Advertising-Based Revenue Models

Facebook’s in the soup again. They’re getting their hands slapped for tracking our location. And I have to ask, why is anyone surprised they’re tracking our location? I’ve said this before, but I’ll say it again. What is good for us is not good for Facebook’s revenue model. And vice versa. Social platforms should never be driven by advertising. Period. Advertising requires targeting. And when you combine prospect targeting and the digital residue of our online activities, bad things are bound to happen. It’s inevitable, and it’s going to get worse. Facebook’s future earnings absolutely dictate that they have to try to get us to spend more time on their platform and they have to be more invasive about tracking what we do with that time. Their walled data garden and their reluctance to give us a peek at what’s happening inside should be massive red flags.

Our social activities are already starting to fragment across multiple platforms – and multiple accounts within each of those platforms. We are socially complex people, and it’s naïve to think that all that complexity could be contained within any one ecosystem – even one as sprawling as Facebook’s. In our real lives – you know, the life you lead when you’re not staring at your phone – our social activities are as varied as our moods, our activities, our environment and the people we are currently sharing that environment with. Being social is not a single aspect of our lives. It is the connecting tissue of all that we are. It binds all the things we do into a tapestry of experience. It reflects who we are, and our identities are shaped by it. Even when we’re alone, as I am while writing this column, we are being social. I am communicating with each of you, and the things I am communicating are shaped by my own social experiences.

My point here is that being social is not something we turn on and off. We don’t go somewhere to be social. We are social. To reduce social complexity and try to contain it within an online ecosystem is a fool’s errand. Trying to support it with advertising just makes it worse. A revenue model based on advertising is self-limiting. It has always been a path of least resistance, which is why it’s so commonly used. It places no financial hurdles on the path to adoption. We have never had to pay money to use Facebook, or Instagram, or Snapchat. But we do pay with our privacy. And eventually, after the inevitable security breaches, we also lose our trust. That lack of trust limits the effectiveness of any social medium.

Of course, it’s not just social media that suffers from the trust issues that come with advertising-based revenue. This advertising-driven path has worked up to now because trust was never really an issue. We took comfort in our perceived anonymity in the eyes of the marketer. We were part of a faceless, nameless mass market that traded attention for access to information and entertainment. Advertising works well with mass. As I mentioned, there are no obstacles to adoption. It was the easiest way to assemble the biggest possible audience. But we now market one to one. And as the ones on the receiving end, we are now increasingly seeking functionality. That is a fundamentally different premise. When we seek to do things, rather than passively consume content, we can no longer remain anonymous. We make choices, we go places, we buy stuff, we do things. In doing this, we leave indelible footprints which are easy to track and aggregate.

Our online and offline lives have now melded to the point where we need – and expect – something more than a collection of platforms offering fragmented functionality. What we need is a highly personalized OS, a foundational operating system that is intimately designed just for us and connects the dots of functionality. This is already happening in bits and pieces through the data we surrender when we participate in the online world. But that data lives in thousands of different walled gardens, including the social platforms we use. Then that data is used to target advertising to us. And we hate advertising. It’s a fundamentally flawed contract that we will – given a viable alternative – opt out of. We don’t trust the social platforms we use, and we’re right not to. If we had any idea of the depth and degree of personal information they have about us, we would be aghast. I have said before that we are willing to trade privacy for functionality, and I still believe this. But once our trust has been broken, we are less willing to surrender that private data, which is essential to the continued profitability of an ad-supported platform.

We need to own our own data. This isn’t so much to protect our privacy as it is to build a new trust contract that will allow that data to be used more effectively for our own purposes and not those of a corporation whose only motive is to increase its own profit. We need to remove the limits imposed by a flawed functionality offering based on serving us ads that we don’t want. If we’re looking for the true disruptor in advertising, that’s it in a nutshell.


Who Should (or Could) Protect Our Data?

Last week, when I talked about the current furor around the Cambridge Analytica scandal, I said that part of the blame – or at least, the responsibility – for the protection of our own data belonged to us. Reader Chuck Lantz responded with:

“In short, just because a company such as FaceBook can do something doesn’t mean they should.  We trusted FaceBook and they took advantage of that trust. Not being more careful with our own personal info, while not very wise, is not a crime. And attempting to dole out blame to both victim and perpetrator ain’t exactly wise, either.”

Whether it’s wise or not, when it comes to our own data, there are only three places we can reasonably look to protect it:

A) The Government

One only has to look at the supposed “grilling” of Zuckerberg by Congress to realize how forlorn a hope this is. In a follow-up post, Wharton ran a list of the questions that Congress should have asked, compiled from their own faculty. My personal favorite comes from Eric Clemons, professor of Operations, Information and Decisions:

“You benefited financially from Cambridge Analytica’s clients’ targeting of fake news and inflammatory posts. Why did you wait years to report what Cambridge Analytica was doing?”

Technology has left the regulatory ability to control it in the dust. The EU is probably the most aggressive legislative jurisdiction in the world when it comes to protecting data privacy. The General Data Protection Regulation goes into effect on May 25 of this year and incorporates sweeping new protections for EU citizens. But it will inevitably come up short in three key areas:

  • Even though it immediately applies to all countries processing the data of EU citizens, international compliance will be difficult to enforce consistently, especially if that processing extends beyond “friendly” countries.
  • Technological “loopholes” will quickly find vulnerable gray areas in the legislation that will lead to the misuse of data. Technology will always move faster than legislation. As an example, the GDPR and blockchain technologies are seemingly on a collision course.
  • Most importantly, the GDPR regulation is aimed at data “worst case scenarios.” But there are many apparently benign applications that can border on misuse of personal data. In trying to police even the worst-case instances, the GDPR requires restrictions that will directly impact users in the area of convenience and functionality. There are key areas such as data portability that aren’t fully addressed in the new legislation. At the end of the day, even though it’s protecting them, users will find the GDPR a pain in the ass.

Even with these fundamental flaws, the GDPR probably represents the world’s best attempt at data regulation. The US, as we’ve seen in the past week, comes up well short of this. And even if the people involved weren’t doddering old technologically inept farts, the mechanisms required for the passing of relevant and timely legislation simply aren’t there. It would be like trying to catch a jet with a lasso. Should this be the job of government? Sure, I can buy that. Can government handle the job? Not based on the evidence we currently have available to us.

B) The companies that aggregate and manipulate our data.

Philosophically, I completely agree with Chuck. Like I said last week – the point of view I took left me ill at ease. We need these companies to be better than they are. We certainly need them to be better than Facebook was. But Facebook has absolutely no incentive to be better. And my fellow Media Insider, Kaila Colbin, nailed this in her column last week:

“Facebook doesn’t benefit if you feel better about yourself, or if you’re a more informed, thoughtful person. It benefits if you spend more time on its site, and buy more stuff. Giving the users control over who sees their posts offers the illusion of individual agency while protecting the prime directive.”

There are no inherent, proximate reasons for companies to be moral. They are built to be profitable (which, by the way, is why governments should never be run like a company). Facebook’s revenue model is directly opposed to the protection of personal data. And that is why Facebook will try to weather this storm by implementing more self-directed security controls to put a good face on things. We will ignore those controls, because it’s a pain in the ass to do otherwise. And this scenario will continue to play out again and again.

C) Ourselves.

It sucks that we have to take this into our own hands. But I don’t see an option. Unless you see something in the first two alternatives that I don’t see, I don’t think we have any choice but to take responsibility. Do you want to put your security in the hands of the government, or Facebook? The first doesn’t have the horsepower to do the job and the second is heading in the wrong direction.

So if the responsibility ends up being ours, what can we expect?

A few weeks ago, another fellow Insider, Dave Morgan, predicted the moats around the walled gardens of data collectors like Facebook will get deeper. But the walled garden approach is not sustainable in the long run. All the market forces are going against it. As markets mature, they move from siloes to open markets. The marketplace of data will head in the same direction. Protectionist measures may be implemented in the short term, but they will not be successful.

This doesn’t negate the fact that the protection of personal information has suddenly become a massive pain point, which makes it a huge market opportunity. And like almost all truly meaningful disruptions in the marketplace, I believe the ability to lock down our own data will come from entrepreneurialism. We need a solution that guarantees universal data portability while at the same time maintaining control, without putting an unrealistic maintenance burden on us. Rather than having the various walled gardens warehouse our data, we should retain ownership, and it should only be offered to platforms like Facebook on a case-by-case, “need to know” transactional basis. Will it be disruptive to the current social ecosystem? Absolutely. And that’s a good thing.
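What might that case-by-case, “need to know” model look like in practice? Here’s a rough sketch in Python – the class, method and field names are purely hypothetical, not any existing system’s API – of a personal data vault where platforms get narrow, expiring grants instead of warehoused copies:

```python
# Hypothetical sketch of user-owned data released on a
# case-by-case, "need to know" basis. All names are invented
# for illustration; no real platform works this way today.

import time

class PersonalDataVault:
    def __init__(self, data):
        self._data = data    # the user retains the raw data
        self._grants = {}    # platform -> (allowed fields, expiry time)

    def grant(self, platform, fields, ttl_seconds):
        """The user explicitly authorizes a platform to see named fields,
        for a limited time only."""
        self._grants[platform] = (set(fields), time.time() + ttl_seconds)

    def request(self, platform, field):
        """A platform asks for one field; denied unless the user granted it
        and the grant hasn't expired."""
        fields, expiry = self._grants.get(platform, (set(), 0.0))
        if field in fields and time.time() < expiry:
            return self._data.get(field)
        return None  # no standing access, no warehoused copy

vault = PersonalDataVault({"email": "me@example.com", "location": "Vancouver"})
vault.grant("facebook", ["location"], ttl_seconds=60)

print(vault.request("facebook", "location"))  # granted, so the field is released
print(vault.request("facebook", "email"))     # never granted, so denied
```

The design choice that matters is the default: every request is denied unless the user has made a specific, time-limited grant – the inverse of today’s walled gardens, where the platform holds everything and we opt out field by field.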

The targeting of advertising is not a viable business model for the intertwined worlds of social connection and personal functionality. There is just too much at stake here. The only way it can work is for the organization doing the targeting to retain ownership of the data used for the targeting. And we should not trust them to do so in an ethical manner. Their profitability depends on them going beyond what is – or should be – acceptable to us.

The Pillorying of Zuckerberg

Author’s Note: When I started this column I thought I agreed with the views stated. And I still do, mostly. But by the time I finished it, there was doubt niggling at me. It’s hard when you’re an opinion columnist who’s not sure you agree with your own opinion. So here’s what I decided to do. I’m running this column as I wrote it. Then, next week, I’m going to write a second column rebutting some of it.

Let’s face it. We love it when smart asses get theirs. For example: Sir Martin Sorrell. Sorry, your lordship, but I always thought you were a pontificating and pretentious dickhead, and I’m kind of rooting for the team digging up dirt on you. Let’s see if you doth protest too much.

Or Jeff Bezos. Okay, granted Trump doesn’t know what the hell he’s talking about regarding Amazon. And we apparently love the company. But just how much sympathy do we really have for the world’s richest man? Couldn’t he stand to be taken down a few pegs?

Don’t get me started on Bill Gates.

But the capo di tutti capi of smart-asses is Mark Zuckerberg. As mad as we are about the gushing security leak that has sprung on his watch, aren’t we all a little bit schadenfreude-ish as we watch the public flailing that is currently playing out? It’s immensely satisfying to point a finger of blame, and it’s doubly so to point it at Mr. Zuckerberg.

Which finger you use I’ll leave to your discretion.

But here’s the thing. As satisfying as it is to make Mark our scapegoat, this problem is systemic. It’s not the domain of one man, or even one company. I’m not absolving Facebook and its founder from blame. I’m just spreading it around so it’s a little more representatively distributed. And as much as we may hate to admit it, some of that blame ends up on our plate. We enabled the system that made this happen. We made personal data the new currency of exchange. And now we’re pissed off because there were exchanges made without our knowledge. It all comes down to this basic question: Who owns our data?

This is the fundamental question that has to be resolved. Up to now, we’ve been more than happy to surrender our data in return for the online functionality we need to pursue trivial goals. We rush to play Candy Crush and damn the consequences. We have mindlessly put our data in the hands of Facebook without any clear boundaries around what was and wasn’t acceptable for us.

If we look at data as a new market currency, our relationship with Facebook is really no different from our relationship with a bank: we deposit our money and allow the bank to use it for its own purposes in return for paying us interest. This is how markets work. They are complicated and interlinked and the furthest thing possible from being proportionately equitable.

Personal Data is a big industry. And like any industry, there is a value chain emerging. We are on the bottom of that chain. We supply the raw data. It is no coincidence that terms like “mining,” “scraping” and “stripping” are used when we talk about harvesting data. The digital trails of our behaviors and private thoughts are a raw resource that has become incredibly valuable. And Facebook just happens to be strategically placed in the market to reap the greatest rewards. They add value by aggregating and structuring the data. Advertisers then buy prepackaged blocks of this data to target their messaging. The targeting that Facebook can provide – thanks to the access they have to our data – is superior to what was available before. This is a simple supply and demand equation. Facebook was connecting the supply – coming from our willingness to surrender our personal data – with the demand – advertisers insisting on more intrusive and personal targeting criteria. It was a market opportunity that emerged and Facebook jumped on it. The phrase “don’t hate the player, hate the game” comes to mind.

When new and untested markets emerge, all goes well until it doesn’t. Then all hell breaks loose. Just like it did with Cambridge Analytica. When that happens, our sense of fairness kicks in. We feel duped. We rush to point fingers. We become judgmental, but everything is done in hindsight. This is all reaction. We have to be reactive, because emerging markets are unpredictable. You can’t predict something like Cambridge Analytica. If it wasn’t them – if it wasn’t this – it would have been something else that would have been equally unpredictable. The emerging market of data exchange virtually guaranteed that hell would eventually break loose. As a recent post on Gizmodo points out,

“the kind of data acquisition at the heart of the Cambridge Analytica scandal is more or less standard practice for every other technology company, including places like Google and even Apple. Facebook simply had the misfortune of getting caught after playing fast and loose with who has control over their data.”

To truly move forward from this, we all have to ask ourselves some hard questions. This is not restricted to Mark Zuckerberg and Facebook. It’s symptomatic of a much bigger issue. And we, the ground level source of this data, will be doing ourselves a disservice in the long run by trying to isolate the blame to any one individual or company. In a very real sense, this is our problem. We are part of a market dynamic that is untested and – as we’ve seen – powerful enough to subvert democracy. Some very big changes are required in the way we treat our own data. We owe it to ourselves to be part of that process.

Raising an Anti-fragile Brand

I’ve come to realize that brand building is a lot like having kids. Much as you want to, at some point you simply can’t control their lives. All you can do is lay a strong foundation. Then you have to cast them adrift on the vicissitudes of life and hope they bounce in the right direction more often than not. It’s a crapshoot, so you damn well better hedge your bets.

Luck rules a perverse universe. All the planning in the world can’t prevent bad luck. Crappy things happen with astonishing regularity to very organized, competent people. The same is true of brands. Crappy things can happen to good brands at any moment – and all the planning in the world can’t prevent it.

Take October 31, 2017 for instance. On that day, Sayfullo Saipov drove a rented truck down a bike lane on Manhattan’s west side, killing 8 and injuring 11 others. What does this have to do with branding? Saipov rented his truck from Home Depot. All the pictures and video of the incident showed the truck with a huge Home Depot logo on the door. You know the saying that there’s no such thing as bad publicity? Wrong!

Or take August 11, 2017, when a bunch of white supremacists decided to hold a torchlight rally in Charlottesville. Their torch of preference? The iconic Tiki torch, which, ironically, is based on a decidedly non-white Polynesian design. Tiki soon took to social media to make it clear they were not amused by the neo-Nazis’ choice.

The first instinct when things go wrong – with kids or brands – is to want to jump in and exert control. But that doesn’t work very well in either case. You need to build “anti-fragility.” This concept comes from Nassim Nicholas Taleb, who describes it as a state where “shocks and disruptions make you stronger and more creative, better able to adapt to each new challenge you face.” So, in the interest of anti-fragility – of kids or brands – here are a few things I’ve learned.

Do the Right Thing….

Like the title of Spike Lee’s 1989 movie, you should always “Do the Right Thing”. That doesn’t mean being perfect. It just means that when you have a choice between sticking to your principles and taking the easy way out – always do the former. A child raised in this type of environment will follow suit. You have laid a strong moral foundation that will be their support system for the rest of their lives. And the same is true of brands. A brand built on strong ethics, by a company that always tries to do the right thing, is exceptionally anti-fragile. When knocks happen – and cracks inevitably appear – an ethical brand will heal itself. An unethical brand that depends on smoke and mirrors will crumble.

Building an Emotional Bank Account

One of the best lessons I’ve ever learned in my life was the metaphor of the emotional bank account from Stephen Covey. My wife and I have tried to pass this along to our children. Essentially, you have to make emotional deposits with those close to you to build up a balance against which you can withdraw when you need to. If you raise kids who make frequent deposits, you know that their friends and family will be there for them when they need them. The degree of anti-fragility in your children depends on the strength of their support network. How loyal are their friends and family? Have they built this loyalty through regular deposits in the respective emotional bank accounts?

The same is true for anti-fragile brands. Brands that build loyalty in an authentic way can weather the inevitable storms that will come their way. This goes beyond the switching-cost rationale. Even brands that have you “locked in” today will inevitably lose that grip as technology keeps removing marketplace friction and competition keeps creeping in. Emotional bank accounts are called that for a reason – this has to do with emotions, not rationality.

Accepting that Mistakes Happen

One of the hardest things about being a parent is giving your children room to make mistakes. But if you want to raise anti-fragile kids, you have to do this.

The same is true with brands. When things go wrong, we tend to want to exert control, to fix things. In doing so, we have to take control from someone else. In the case of parenting, you take control from your children, along with the opportunity for them to learn how to fix things themselves. In the case of branding, you would have to take control from the market. But in the latter case, you can’t take control, because the market won’t give it up. You can respond, but you can’t control. It’s a bitter lesson to learn – but it’s a lesson best learned sooner rather than later.

Remember – You’re In This for the Long Run

Raising anti-fragile children means learning about proportionate responses when things go off the rails. The person your child is at 15 is most likely not the person they will be at 25. You’re not going to be the same person either. So while you have to be firm when they step out of line, you also have to take care not to destroy the long-term foundations of your relationship. Overreacting can cause lasting damage.

The same is true for brands. The market has a short memory. No matter how bad today may be, if you have an anti-fragile brand, the future will be better. Sometimes it’s just a matter of holding on and riding out the storm.