The Unusual Evolution of the Internet

The Internet we have today evolved out of improbability. It shouldn’t have happened like it did. It evolved as a wide-open network forged by starry-eyed academics and geeks who really believed it might make the world better. It wasn’t supposed to win against walled gardens like CompuServe, Prodigy and AOL — but it did. If you rolled back the clock, knowing what we know now, you could be sure it would never play out the same way again.

To use the same analogy that Eric Raymond did in his now-famous essay on the development of Linux, these were people who believed in bazaars rather than cathedrals. The internet was cobbled together to scratch an intellectual and ethical itch, rather than a financial one.

But today, as this essay in The Atlantic by Jonathan Zittrain warns us, the core of the internet is rotting. Because it was built by everyone and no one, all the superstructure that was assembled on top of that core is teetering. Things work, until they don’t: “The internet was a recipe for mortar, with an invitation for anyone, and everyone, to bring their own bricks.”

The problem is, it’s no one’s job to make sure those bricks stay in place.

Zittrain talks about the holes in humanity’s store of knowledge. But there’s another thing about this evolution that is either maddening or magical, depending on your perspective: It was never built with a business case in mind.

Eventually, commerce pipes were retrofitted into the whole glorious mess, and billions were made. Google alone has managed to pull in over a trillion dollars in revenue in less than 20 years by becoming the de facto index to the world’s most haphazard library of digital stuff. Amazon went one better, using the Internet to reinvent humanity’s marketplace and pulling in $2 trillion in revenue along the way.

But despite all this massive monetization, the beneficiaries still at least had to pay lip service to that original intent: the naïve belief that technology could make us better, and that it didn’t just have to be about money.

Even Google, which is on its way to posting $200 billion in annual revenue, making it the biggest media company in the world (bigger than Netflix, Disney, Comcast, or AT&T), stumbled on its way to making a buck. Perhaps it’s because its founders, Larry Page and Sergey Brin, didn’t trust advertising. In their original academic paper, they said that “advertising-funded search engines will inherently be biased toward the advertisers and away from the needs of consumers.” Of course, they ultimately ended up giving in to the dark side of advertising. But I watched the Google user experience closely from 2003 to 2011, and that dedication to the user was always part of a delicate balancing act that was generally successful.

But that innocence of the original Internet is almost gone, as I noted in a recent post. And there are those who want to make sure that the next thing — whatever it is — is designed from the start with monetization built in. It’s why Mark Zuckerberg is feverishly hoping that his company can build the foundations of the Metaverse. It’s why Google is trying to assemble the pipes and struts that will hold up the new web. Those things would be completely free of the moral — albeit naïve — constraints that still linger in the original model. In the new one, there would be only one goal: making sure shareholders are happy.

It’s also natural that many of those future monetization models will likely embrace advertising, which is, as I’ve said before, the path of least resistance to profitability.

We should pay attention to this. The very fact that the Internet’s original evolution was as improbable and profit-free as it was puts us in a unique position today. What would it look like if things had turned out differently, and the internet had been profit-driven from day one? I suspect it might have been better-maintained but a lot less magical, at least in its earliest iterations.

Whatever that new thing turns out to be, it will form a significant part of our reality. It will be even more foundational and necessary to us than the current internet. We won’t be able to live without it. For that reason, we should worry about the motives that may lie behind whatever “it” will be.

The Relationship between Trust and Tech: It’s Complicated

Today, I wanted to follow up on last week’s post about not trusting tech companies with your privacy. In that post, I said, “To find a corporation’s moral fiber, you always, always, always have to follow the money.”

A friend from back in my industry show days — the always insightful Brett Tabke — reached out to comment. He suggested that the “holier-than-thou” privacy stand Tim Cook and Apple have taken in the current brouhaha with Facebook is one of convenience.

“I really wonder though if it is a case of do-the-right-thing privacy moral stance, or one of convenience that supports their ecosystem, and attacks a competitor?” he asked.

It’s hard to argue against that. As Brett mentioned, Apple really can’t lose by “taking money out of a side-competitors pocket and using it to lay more foundational corner stones in the walled garden, [which] props up the illusion that the garden is a moral feature, and not a criminal antitrust offence.”

But let’s look beyond Facebook and Apple for a moment. As Brett also mentioned to me, “So who does a privacy action really impact more? Does it hit Facebook or ultimately Google? Facebook is just collateral damage here in the real war with Google. Apple and Google control their own platform ecosystems, but only Google can exert influence over the entire web. As we learned from the unredacted documents in the States vs Google antitrust filings, Google is clearly trying to leverage its assets to exert that control — even when ethically dubious.”

So, if we are talking trust and privacy, where is Google in this debate? Given the nature of Google’s revenue stream, its stand on privacy is not quite as blatantly obvious (or as self-serving) as Facebook’s. Both depend on advertising to pay the bills, but the nature of that advertising is significantly different.

According to its most recent annual report, 57% of the annual $182-billion revenue stream of Alphabet (Google’s parent company) still comes from search ads. And search advertising is relatively immune to crackdowns on privacy.
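For a sense of scale, here is that split worked out in dollars; a quick back-of-the-envelope sketch in Python, using only the figures cited above (the rounding is mine):

```python
# Rough split of Alphabet's annual revenue, using the figures cited above.
total_revenue_billions = 182   # Alphabet's reported annual revenue, in billions of dollars
search_share = 0.57            # share reported as coming from search ads

search_ads = total_revenue_billions * search_share
everything_else = total_revenue_billions - search_ads

print(f"Search ads: ~${search_ads:.0f}B per year")            # ~$104B
print(f"Everything else: ~${everything_else:.0f}B per year")  # ~$78B
```

In other words, roughly $104 billion a year still rides on the one channel that privacy crackdowns touch the least.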

When you search for something on Google, you have already expressed your intent, which is the clearest possible signal with which you can target advertising. Yes, additional data taken with or without your knowledge can help fine-tune ad delivery — and Google has shown it’s certainly not above using this — but Apple tightening up its data security will not significantly impair Google’s ability to make money through its search revenue channel.

Facebook’s advertising model, on the other hand, targets you well before any expression of intent. For that reason, it has to rely on behavioral data and other targeting to effectively deliver those ads. Personal data is the lifeblood of such targeting. Turn off the tap, and Facebook’s revenue model dries up instantly.

But Google has always had ambitions beyond search revenue. Even today, 43% of its revenue comes from non-search sources. The company has always struggled with the inherently capped nature of search-based ad inventory: there are only so many searches against which you can serve advertising. And, as Brett points out, that leads Google to look at the very infrastructure of the web to find new revenue sources. And that has led to signs of troubling collusion with Facebook.

Again, we come back to my “follow the money” mantra for rooting out rot in the system. And in this case, the money we’re talking about is the premium that Google skims off the top when it determines which ads are shown to you. That premium depends on Google’s ability to use data to target the most effective ads possible through its own “Open Bidding” system. According to the unredacted documents released in the antitrust suit, that premium can amount to 22% to 42% of the ad spend that goes through that system.
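To make that premium concrete, here is a hypothetical worked example; the 22% to 42% range comes from the filings cited above, while the $100,000 campaign budget is purely illustrative:

```python
# Hypothetical ad budget routed through an exchange that takes a 22%-42% cut,
# the range cited in the unredacted antitrust filings.
ad_spend = 100_000                  # illustrative campaign budget, in dollars
low_take, high_take = 0.22, 0.42    # low and high ends of the cited premium

print(f"Skimmed off the top: ${ad_spend * low_take:,.0f} to ${ad_spend * high_take:,.0f}")
print(f"Left for the rest of the chain: ${ad_spend * (1 - high_take):,.0f} to ${ad_spend * (1 - low_take):,.0f}")
```

On a $100,000 budget, somewhere between $22,000 and $42,000 never makes it past the middleman.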

To sum up, it appears that if you want to know who can be trusted most with your data, it’s the companies that don’t depend on that data to support an advertising revenue model. Right now, that’s Apple. But as Brett also pointed out, don’t mistake this for any warm, fuzzy feeling that Apple is your knight in shining armour: “Apple has shown time and time again they are willing to sacrifice strong desires of customers in order to make money and control the ecosystem. Can anyone look past headphone jacks, Macbook jacks, or the absence of Macbook touch screens without getting the clear indication that these were all robber-baronesque choices of a monopoly in action? If so, then how can we go ‘all in’ on privacy with them just because we agree with the stance?”

The Tech Giant Trust Exercise

If we look at those that rule in the Valley of Silicon — the companies that determine our technological future — it seems, as I previously wrote, that Apple alone is serious about protecting our privacy.

MediaPost editor in chief Joe Mandese shared a post late last month about how Apple’s new privacy features are increasingly taking aim at the various ways in which advertising can be targeted to specific consumers. The latest victim in those sights is geotargeting.

Then Steve Rosenbaum mentioned last week that as Apple and Facebook gird their loins and prepare to do battle over the next virtual dominion — the metaverse — they are taking two very different approaches. Facebook sees this next dimension as an extension of its hacker mentality, a “raw, nasty network of spammers.” Apple is, as always, determined to exert top-down control over who plays in its sandbox, welcoming only those who are willing to play by its rules. In that approach, the company is also signaling that it will take privacy in the metaverse seriously. Apple CEO Tim Cook said he believes “users should have the choice over the data that is being collected about them and how it’s used.”

Apple can take this stand because its revenue model doesn’t depend on advertising. To find a corporation’s moral fiber, you always, always, always have to follow the money. Facebook depends on advertising for revenue. And it has repeatedly shown it doesn’t really give a damn about protecting the privacy of users. Apple, on the other hand, takes every opportunity to unfurl the privacy banner as its battle standard because its revenue stream isn’t really impacted by privacy.

If you’re looking for the rot at the roots of technology, a good place to start is anything that relies on advertising. In my 40 years in marketing, I have come to the inescapable conclusion that it is impossible for business models that rely on advertising as their primary source of revenue to stay on the right side of privacy concerns. There is an inherent conflict that cannot be resolved. In a recent earnings call, Facebook CEO Mark Zuckerberg said it about as clearly as it could be said: “As expected, we did experience revenue headwinds this quarter, including from Apple’s [privacy rule] changes that are not only negatively affecting our business, but millions of small businesses in what is already a difficult time for them in the economy.”

Facebook has proven time and time again that when the need for advertising revenue runs up against a question of ethical treatment of users, it will always be the ethics that give way.

It’s also interesting that Europe is light years ahead of North America in introducing legislation that protects privacy. According to one Internet Privacy Ranking study, four of the five top countries for protecting privacy are in Northern Europe. Australia is the fifth. My country, Canada, shares these characteristics. We rank seventh. The US ranks 18th.

There is an interesting corollary here I’ve touched on before. All these top-ranked countries are social democracies. All have strong public broadcasting systems. All have a very different relationship with advertising than the U.S. does. We who live in these countries are not immune from the dangers of advertising (this is certainly true for Canada), but our media structure is not wholly dependent on it. The U.S., right from the earliest days of electronic media, took a different path — one that relied almost exclusively on advertising to pay the bills.

As we start thinking about things like the metaverse or other forms of reality that are increasingly intertwined with technology, this reliance on advertising-funded platforms is something we must consider long and hard. It won’t be the companies that initiate the change. An advertising-based business model follows the path of least resistance, making it the shortest route to that mythical unicorn success story. The only way this will change will be if we — as users — demand that it changes.

And we should — we must — demand it. Ad-based tech giants that have no regard for our personal privacy are one of the greatest threats we face. The more we rely on them, the more they will ask from us.

The Terrors of New Technology

My neighbour just got a new car. And he is terrified. He told me so yesterday. He has no idea how the hell to use it. This isn’t just a new car. It’s a massive learning project that can intimidate the hell out of anyone. It’s technology run amok. It’s the canary in the coal mine of the new world we’re building.

Perhaps – just perhaps – we should be more careful about what we wish for.

Let me provide the back story. His last car was his retirement present to himself, which he bought in 2000. He loved the car. It was a hardtop convertible. At the time he bought it, it was state of the art. But this was well before the Internet of Things and connected technology. The car did pretty much what you expected it to. Almost anyone could get behind the wheel and figure out how to make it go.

This year, under much prompting from his son, he finally decided to sell his beloved convertible and get a new car. But this isn’t just any car. It is a high-end electric sports car. Again, it is top of the line. And it is connected in pretty much every way you could imagine, and in many ways that would never cross any of our minds.

My neighbour has had this new car for about a week. And he’s still afraid to drive it anywhere. “Gord,” he said, “the thing terrifies me. I still haven’t figured out how to get it to open my garage door.” He has done online tutorials. He has set up a Zoom session with the dealer to help him navigate the umpteen zillion screens that show up on the smart display. After several frustrating experiments, he has learned he needs to pair it with his wifi system at home to get it to recharge properly. No one could just hop behind the wheel and drive it. You would have to sign up for an intensive technology boot camp before you were ready to climb a near-vertical learning curve. The capabilities of this car are mind boggling. And that’s exactly the problem. It’s damned near impossible to do anything with a boggled mind.

The acceptance of new technology has generated a vast body of research. I myself did an exhaustive series of blog posts on it back in 2014. Ever since sociologist Everett Rogers did his seminal work on the topic back in 1962, we have known that there are hurdles to overcome in grappling with something new, and that we don’t all clear those hurdles at the same rate. Some of us never clear them at all.
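Rogers’ framework divides any market into five adopter categories spread along a bell curve. Here is a minimal sketch of that breakdown in Python, using his canonical percentages; the million-person market is just for illustration:

```python
# Everett Rogers' five adopter categories and their canonical shares
# of the population (Diffusion of Innovations, 1962).
adopter_categories = {
    "innovators": 0.025,
    "early adopters": 0.135,
    "early majority": 0.34,
    "late majority": 0.34,
    "laggards": 0.16,
}

population = 1_000_000  # illustrative market size
cumulative = 0.0
for category, share in adopter_categories.items():
    cumulative += share
    print(f"{category:15s} {share:5.1%}   adopted so far: {cumulative:6.1%} "
          f"(~{round(population * cumulative):,} people)")
```

Half of any market sits in the late majority and laggard categories, the people who clear those hurdles much later, if at all.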

But I also suspect that the market, especially at the high end, has become so enamored with embedding technology that it has forgotten how difficult it might be for some of us to adopt that technology, especially those of us of a certain age.

I am and always have been an early adopter. I geek out on new technology. That’s probably why my neighbour has tapped me to help him figure out his new car. I’m the guy my family calls when they can’t get their new smartphone to work. And I don’t mind admitting I’m slipping behind. I think we’re all the proverbial frogs in boiling water. And that water is technology. It’s getting harder and harder just to use the new shit we buy.

Here’s another thing that drives me batty about technology. It’s a constantly moving target. Once you learn something, it doesn’t stay learnt. It upgrades itself, changes platforms or becomes obsolete. Then you have to start all over again.

Last year, I started retrofitting our home to be a little bit smarter. And in the space of that year, I have accumulated sensors that mysteriously go offline, hubs that suddenly stop working, automation routines that are moodier than a hormonal teenager, and a lot of stuff that just fits into the “I have no idea” category. When it all works, it’s brilliant. I remember that one day – it was special. The other 364 have been a pain in the ass of varying intensity. And that’s for me, the tech guy. My wife sometimes feels like a prisoner in her own home. She has little appreciation for the mysterious gifts of technology that allow me to turn on our kitchen lights when we’re in Timbuktu (should we ever go there, and if we can find a good wifi signal).

Technology should be a tool. It should serve us, not hold us slave to its whims. It would be so nice to be able to just make coffee from our new coffee maker, instead of spending a week trying to pair it with our toaster so breakfast is perfectly synchronized.

Oops, got to go. My neighbour’s car has locked him in his garage.

Getting Bitch-Slapped by the Invisible Hand

Adam Smith first talked about the invisible hand in 1759. He was looking at the divide between the rich and the poor and said, in essence, that “greed is good.”

Here is the exact wording:

“They (the rich) are led by an invisible hand to make nearly the same distribution of the necessaries of life, which would have been made, had the earth been divided into equal portions among all its inhabitants, and thus without intending it, without knowing it, advance the interest of the society.”

The effect of “the hand” is most clearly seen in the wide-open market that emerges after established players collapse and make way for new competitors riding a wave of technical breakthroughs. Essentially, it is a cycle.

But something is happening that may never have happened before. For the past 300 years of our history, the one constant has been the trend of consumerism. Economic cycles have rolled through, but all have been in the service of us having more things to buy.

Indeed, Adam Smith’s entire theory depends on greed: 

“The rich … consume little more than the poor, and in spite of their natural selfishness and rapacity, though they mean only their own conveniency, though the sole end which they propose from the labours of all the thousands whom they employ, be the gratification of their own vain and insatiable desires, they divide with the poor the produce of all their improvements.”

It’s the trickle-down theory of gluttony: Greed is a tide that raises all boats.

The Theory of The Invisible Hand assumes there are infinite resources available. Waste is necessarily built into the equation. But we have now gotten to the point where consumerism has been driven past the planet’s ability to sustain our greedy grasping for more.

Nobel-Prize-winning economist Joseph Stiglitz, for one, recognized that environmental impact is not accounted for with this theory. Also, if the market alone drives things like research, it will inevitably become biased towards benefits for the individual and not the common good.

There needs to be a more communal counterweight to balance the effects of individual greed. Given this, the new age of consumerism might look significantly different.

There is one outcome of market-driven economics that is undeniable: All the power lies in the connection between producers and consumers. Because the world has been built on the predictable truth of our always wanting more, we have been given the ability to disrupt that foundation simply by changing our value equation: buying for the greater good rather than our own self-interest.

I’m skeptical that this is even possible.

It’s a little daunting to think that our future survival relies on our choices as consumers. But this is the world we have made. Consumption is the single greatest driver of our society. Everything else is subservient to it.

Government, science, education, healthcare, media, environmentalism: All the various planks of our societal platform rest on the cross-braces of consumerism. It is the one behavior that rules all the others. 

This becomes important to think about because this shit is getting real — so much faster than we thought possible.

I write this from my home, which is about 100 miles from the village of Lytton, British Columbia. You might have heard it mentioned recently. On June 29, Lytton reported the highest temperature ever recorded in Canada: a scorching 121.3 degrees Fahrenheit (49.6 degrees C for my Canadian readers). That’s higher than the hottest temperature ever recorded in Las Vegas. Lytton is 1,000 miles north of Las Vegas.

As I said, that was how Lytton made the news on June 29. But it also made the news again on June 30. That was when a wildfire burned almost the entire town to the ground.

In one week of an unprecedented heat wave, hundreds of sudden deaths occurred in my province. It’s believed the majority of them were caused by the heat.

We are now at the point where we have to shift the mental algorithms we use when we buy stuff. Our consumer value equation has always been self-centered, based on the calculus of “what’s in it for me?” It was this calculation that made Smith’s Invisible Hand possible.

But we now have to change that behavior and make choices that embrace individual sacrifice. We have to start buying based on “What’s best for us?”

In a recent interview, a climate-change expert said he hoped we would soon see carbon-footprint stickers on consumer products. Given a choice between two pairs of shoes, one that was made with zero environmental impact and one that was made with a total disregard for the planet, he hoped we would choose the former, even if it was more expensive.

I’d like to think that’s true. But I have my doubts. Ethical marketing has been around for some time now, and at best it’s a niche play. According to the Canadian Coalition for Farm Animals, the vast majority of egg buyers in Canada — 98% — buy caged eggs even though we’re aware that the practice is hideously cruel.  We do this because those eggs are cheaper.

The sad fact is that consumers really don’t seem to care about anything other than their own self-interest. We don’t make ethical choices unless we’re forced to by government legislation. And then we bitch like hell about our rights as consumers. “We should be given the choice,” we chant.  “We should have the freedom to decide for ourselves.”

Maybe I’m wrong. I sure hope so. I would like to think — despite recent examples to the contrary of people refusing to wear face masks or get vaccinated despite a global pandemic that took millions of lives — that we can listen to the better angels of our nature and make choices that extend our ability to care beyond our circle of one.

But let’s look at our track record on this. From where I’m sitting, 300 years of continually making bad choices have now brought us to the place where we no longer have the right to make those choices. This is what The Invisible Hand has wrought. We can bitch all we want, but that won’t stop more towns like Lytton B.C. from burning to the ground.

Why Our Brains Struggle With The Threat Of Data Privacy

It seems contradictory: we don’t want to share our personal data but, according to a recent study reported on by MediaPost’s Laurie Sullivan, we want the brands we trust to know us when we come shopping.

But it’s not — really. It ties in with the way we’ve always thought.

Again, we just have to understand that we really don’t understand how the data ecosystem works — at least, not on an instant and intuitive level. Our brains have no evolved mechanisms for dealing with new concepts like data privacy, so we borrow parts of the brain that evolved for other purposes. Evolutionary biologists call this “exaptation.”

For example, the way we deal with brands seems to be the same way we deal with people — and we have tons of experience doing that. Some people we trust. Most people we don’t. With the people we trust, we have no problem sharing something of ourselves. In fact, it’s exactly that sharing that nurtures relationships and helps them grow.

It’s different with people we don’t trust. Not only do we not share with them, we work to avoid them, putting physical distance between us and them. We’d cross to the other side of the street to avoid bumping into them.

In a world that was ordered and regulated by proximity, this worked remarkably well. Keeping our enemies at arm’s length generally kept us safe from harm.

Now, of course, distance doesn’t mean the same thing it used to. We now maneuver in a world of data, where proximity and distance have little impact. But our brains don’t know that.

As I said, the brain doesn’t really know how digital data ecosystems work, so it does its best to substitute concepts it has evolved to handle those it doesn’t understand at an intuitive level.

The proxy for distance the brain seems to use is task focus. If we’re trying to do something, everything related to that thing is “near” and everything not relevant to it is “far.” But this is an imperfect proxy at best and an outright misleading one at worst.

For example, we will allow our data to be collected in order to complete the task. The task is “near.” In most cases, the data we share has little to do with the task we’re trying to accomplish. It is labelled by the brain as “far” and therefore poses no immediate threat.

It’s a bait and switch tactic that data harvesters have perfected. Our trust-warning systems are not engaged because there are no proximate signs to trigger them. Any potential breaches of trust happen well after the fact – if they happen at all. Most times, we’re simply not aware of where our data goes or what happens to it. All we know is that allowing that data to be collected takes us one step closer to accomplishing our task.

That’s what sometimes happens when we borrow one evolved trait to deal with a new situation:  The fit is not always perfect. Some aspects work, others don’t.

And that is exactly what is happening when we try to deal with the continual erosion of online trust. In the moment, our brain is trying to apply the same mechanisms it uses to assess trust in a physical world. What we don’t realize is that we’re missing the warning signs our brains have evolved to intuitively look for.

We also drag this evolved luggage with us when we’re dealing with our favorite brands. One of the reasons you trust your closest friends is that they know you inside and out. This intimacy is a product of a physical world. It comes from sharing the same space with people.

In the virtual world, we expect the brands we know and love to have this same knowledge of us. It frustrates us when we are treated like a stranger. Think of how you would react if the people you love the most gave you the same treatment.

This jury-rigging of our personal relationship machinery to do double duty for the way we deal with brands may sound far-fetched, but marketing brands have only been around for a few hundred years. That is just not enough time for us to evolve new mechanisms to deal with them.

Yes, the rational, “slow loop” part of our brains can understand brands, but the “fast loop” has no “brand” or “data privacy” modules. It has no choice but to use the functional parts it does have.

As I mentioned in a previous post, there are multiple studies that indicate that it’s these parts of our brain that fire instantly, setting the stage for all the rationalization that will follow. And, as our own neuro-imaging study showed, it seems that the brain treats brands the same way it treats people.

I’ve been watching this intersection between technology and human behaviour for a long time now. More often than not, I see this tendency of the brain to make split-second decisions in environments where it just doesn’t have the proper equipment to make those decisions. When we stop to think about these things, we believe we understand them. And we do, but we had to stop to think. In the vast majority of cases, that’s just not how the brain works.

The Problem With Woke Capitalism

I’ve been talking a lot over the last month or two about the concept of corporate trust. Last week I mentioned that we as consumers have a role to play in this: It’s our job to demand trustworthy behavior from the companies we do business with.

The more I thought about that idea, the more I found myself putting it in the current context of cancel culture and woke capitalism. In this world of social media hyperbole, is this how we can flex our consumer muscles?

I think not. When I think of cancel culture and woke capitalism, I think of signal-to-noise ratio. And when I look at how most corporations signal their virtue, I see a lot of noise but very little signal.

Take Nike, for example. There is probably no corporation in the world that practises more virtue signaling than Nike. It is the master of woke capitalism. But if you start typing Nike into Google, the first suggested search you’ll see is “Nike scandal.” And if you launch that search, you’ll get a laundry list of black eyes in Nike’s day-to-day business practices, including sweatshops, doping, and counterfeit Nike product rings.

The corporate watchdog site ethicalconsumer.org has an extensive entry on Nike’s corporate faux pas. Perhaps Nike needs to spend a little less time preaching and a little more time practicing.

Then there’s 3M. There is absolutely nothing flashy about the 3M brand. 3M is about as sexy as Mr. Wood, my high school Social Studies teacher. Mr. Wood wore polyester suits (granted, it was the ‘70s) and had a look that was more Elvis Costello than Elvis Presley. But he was by far my favorite teacher. And you could trust him with anything.

I think 3M might be the Mr. Wood of the corporate world.

I had the pleasure of working with 3M as a consultant for the last three or four years of my professional life. I still have friends who were and are 3Mers. I have never, in any professional setting, met a more pleasant group of people.

When I started writing this and thought about an example of a trustworthy corporation, 3M was the first that came to mind. The corporate ethos at 3M is, as one vice president described it to me, “Minnesota nice.”

Go ahead. Try Googling “3M corporate scandal.” Do you know what comes up? 3M investigating other companies that are selling knockoff N95 facemasks. The company is the one investigating the scandal, not causing it. (Just in case you’re wondering, I tried searching on ethicalconsumer.org for 3M. Nothing came up.)

That’s probably why 3M has been chosen as one of the most ethical companies in the world by the Ethisphere Institute for the last eight consecutive years.

Real trust comes from many places, but a social media campaign is never one of them. It comes from the people you hire and how you treat those people. It comes from how you handle HR complaints, especially when they’re about someone near the top of the corporate ladder. It comes from how you set your product research goals, where you make those products, who you sell those products to, and how you price those products. It comes from how you conduct business meetings, and the language that’s tolerated in the lunchroom.

Real trust is baked in. It’s never painted on.

Social media has armed consumers with a voice, as this lengthy essay in The Atlantic magazine shows. But if we go back to our signal versus noise comparison, everything on social media tends to be a lot of “noise,” and very little signal. Protesting through online channels tends to create hyper-virtuous bubbles that are far removed from the context of day-to-day reality. And — unfortunately — companies are getting very good at responding in kind. Corrupt internal power structures and business practices are preserved, while scapegoats are publicly sacrificed and marketing departments spin endlessly.

As Helen Lewis, the author of The Atlantic piece, said,

“That leads to what I call the ‘iron law of woke capitalism’: Brands will gravitate toward low-cost, high-noise signals as a substitute for genuine reform, to ensure their survival.”

Empty “mea culpas” and hyperbolic noise made just for the sake of looking good are not how you build trust. Trust is built on consistency and reliability. It is built on a culture that is committed to doing the right thing, even when that may not be the most profitable thing. Trust is built on being “Minnesota nice.”

Thank you, 3M, for that lesson. And thank you, Mr. Wood.

The Profitability Of Trust

Some weeks ago, I wrote about the crisis of trust identified by the Edelman Trust Barometer study and its impact on brands. In that post, I said that the trust in all institutions had been blown apart, hoisted on the petard of our political divides.

We don’t trust our government. We definitely don’t trust the media – especially the media that sits on the other side of the divide. Weirdly, our trust in NGOs has also slipped, perhaps because we suspect them to be politically motivated.

So whom — or what — do we trust? Well, apparently, we still trust corporations. We trust the brands we know. They, alone, seem to have been able to stand astride the chasm that is splitting our culture.

As I said before, I’m worried about that.

Now, I don’t doubt there are well-intentioned companies out there. I know there are several of them. But there is something inherent in the DNA of a for-profit company that I feel makes it difficult to trust them. And that something was summed up years ago by economist Milton Friedman, in what is now known as the Friedman Doctrine. 

In that doctrine, Friedman says that a corporation should have only one purpose: “An entity’s greatest responsibility lies in the satisfaction of the shareholders.” The corporation should, therefore, always endeavor to maximize its revenues to increase returns for the shareholders.

So, a business will be trustworthy as long as it is in its financial interest to be trustworthy. But what happens when those two things come into conflict, as they inevitably will?

Why is it inevitable, you ask? Why can’t a company be profitable and worthy of our trust? Ah, that’s where, sooner or later, the inevitable conflict will come.

Let’s strip this down to the basics with a thought experiment.

In a 2017 article in the Harvard Business Review, neuroscientist Paul J. Zak talks about the neuroscience of trust. He explains how he discovered that oxytocin is the neurochemical basis of trust — what he has since called The Trust Molecule.

To do this, he set up a classic trust task borrowed from Nobel laureate economist Vernon Smith:

“In our experiment, a participant chooses an amount of money to send to a stranger via computer, knowing that the money will triple in amount and understanding that the recipient may or may not share the spoils. Therein lies the conflict: The recipient can either keep all the cash or be trustworthy and share it with the sender.”
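To make the incentive structure concrete, here is a minimal sketch of that task in Python, assuming a $10 stake; the tripling rule comes from the quote above, while the specific amounts and the recipient’s split are purely illustrative:

```python
# A minimal sketch of the trust task described above: the sender's transfer
# is tripled, then the recipient decides how much (if anything) to return.
def trust_game(sent: float, returned_fraction: float, endowment: float = 10.0):
    tripled = sent * 3
    returned = tripled * returned_fraction
    sender_payoff = endowment - sent + returned
    recipient_payoff = tripled - returned
    return sender_payoff, recipient_payoff

# A trusting sender meets a trustworthy recipient who returns half...
print(trust_game(sent=10, returned_fraction=0.5))   # (15.0, 15.0): both end up ahead
# ...versus a recipient who keeps everything.
print(trust_game(sent=10, returned_fraction=0.0))   # (0.0, 30.0): trust is punished
```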

The choice of this task speaks volumes. It also lays bare the inherent conflict that sooner or later will face all corporations: money or trust? This is especially true of companies that have shareholders. Our entire capitalist ethos is built on the foundation of the Friedman Doctrine. Imagine what those shareholders will say when given the choice outlined in Zak’s experiment: “Keep the money, screw the trust.” Sometimes, you can’t have both. Especially when you have a quarterly earnings target to hit.

For humans, trust is our default position. It has been shown through game theory research using the Prisoner’s Dilemma that the best strategy for evolutionary success is one called “Tit for Tat.” In Tit for Tat, our opening position is typically one of trust and cooperation. But if we’re taken advantage of, then we raise our defences and respond in kind.
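For readers who haven’t run into it, Tit for Tat fits in a few lines of Python. A minimal sketch, using the standard payoff values from Axelrod’s Prisoner’s Dilemma tournaments (3 for mutual cooperation, 1 for mutual defection, 5 and 0 for exploiting and being exploited):

```python
# Tit for Tat in an iterated Prisoner's Dilemma: cooperate first,
# then simply repeat whatever the other player did last round.
PAYOFFS = {  # (my move, their move) -> my score
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    return "C" if not opponent_history else opponent_history[-1]

def play(opponent_strategy, rounds=10):
    my_history, their_history, score = [], [], 0
    for _ in range(rounds):
        my_move = tit_for_tat(their_history)
        their_move = opponent_strategy(my_history)
        score += PAYOFFS[(my_move, their_move)]
        my_history.append(my_move)
        their_history.append(their_move)
    return score

print(play(lambda history: "C"))   # vs. always cooperate: 30 points
print(play(lambda history: "D"))   # vs. always defect: 9 points
print(play(tit_for_tat))           # vs. itself: 30 points
```

Opening with cooperation and retaliating only after a betrayal is exactly the pattern described above: trust is the default, but it gets withdrawn quickly once it is abused.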

So, when we look at the neurological basis of trust, consistency is another requirement. We will be willing to trust a brand until it gives us a reason not to. The more reliable the brand is in earning that trust, the more embedded that trust will become. As I said in the previous post, consistency builds beliefs, and once beliefs are formed, it’s difficult to shake them loose.

Trying to thread this needle between trust and profitability can become an exercise in marketing “spin”: telling your customers you’re trustworthy while you are doing everything possible to maximize your profits. A case in point — which we’ve seen repeatedly — is Facebook and its increasingly transparent efforts to maximize advertising revenue while gently whispering in our ear that we should trust it with our most private information.

Given the potential conflict between trust and profit, is trusting a corporation a lost cause? No, but it does put a huge amount of responsibility on the customer. The Edelman study has made it abundantly clear that if there is such a thing as a “market” for trust, then trust is in dangerously short supply. That is why we’re turning to brands and for-profit corporations as a place to put our trust: we have built a society where we believe they are the only thing left we can trust.

Mark Carney, the former governor of both the Bank of England and the Bank of Canada, puts this idea forward in his new book, “Value(s).” In it, he shows how “market economies” have evolved into “market societies,” where price determines the value of everything. And corporations will follow profit, wherever it leads.

If we understand that fundamental characteristic of corporations, it does bring an odd kind of power that rests in the hands of consumers.

Markets are not unilateral beasts. They rely on the balance between supply and demand. We form half of that equation. It is our willingness to buy that determines prices in Carney’s “market societies.” So, if we are willing to place our trust in a brand, we can also demand that the brand prove our trust has not been misplaced, through the rewards and penalties built into the market.

Essentially, we have to make trust profitable.

The Split-Second Timing of Brand Trust

Two weeks ago, I talked about how brand trust can erode so quickly and cause so many issues. I intimated that advertising and branding have become decoupled — and advertising might even erode brand trust, leading to a lasting deficit.

Now I think that may be a little too simplistic. Brand trust is a holistic thing — the sum total of many moving parts. Taking advertising in isolation is misleading. Will one social media ad for a brand lead to broken trust? Probably not. But there may be a cumulative effect that we need to be aware of.

A closer look at the Edelman Trust Barometer study reveals a very interesting picture. Essentially, the study shows there is a trust crisis. Edelman calls it information bankruptcy.

The slide in trust is probably not surprising. It’s hard to be trusting when you’re afraid, and if there’s one thing the Edelman Barometer shows, it’s that we are globally fearful. Our collective hearts are in our mouths. And when this happens, we are hardwired to respond by lowering our trust and raising our defenses.

But our traditional sources for trusted information — government and media — have also abdicated their responsibilities to provide it. They have instead stoked our fears and leveraged our divides for their own gains. NGOs have suffered the same fate. So, if you can’t trust the news, your leaders or even your local charity, who can you trust?

Apparently, you can trust a corporation. Edelman shows that businesses are now the most trusted organizations in North America. Media, especially social media, is the least trusted institution. I find this profoundly troubling, but I’ll put that aside for a future post. For now, let’s just accept it at face value.

As I said in that previous column, we want to trust brands more than ever. But we don’t trust advertising. This creates a dilemma for the marketer.

This all brings to mind a study I was involved with a little over 10 years ago. Working with Simon Fraser University, we wanted to know how the brain responded to trusted brands. The initial results were fascinating — but unfortunately, we never got the chance to do the follow-up study we intended.

This was an ERP (event-related potential) study, in which we looked at how the brain responded when we showed brand images as a stimulus. ERP studies are useful for understanding the brain’s immediate response to something — the fast loop I talk so much about — before the slow loop has a chance to kick in and rationalize things.

We know now that what happens in this fast loop really sets the stage for what comes after. It essentially makes up the mind, and then the slow loop adds rational justification for what has already been decided.

What we found was interesting: The way we respond to our favorite brands is very similar to the way we respond to pictures of our favorite people. The first hint of this occurred in just 150 milliseconds, about one-sixth of a second. The next reinforcement was found at 400 milliseconds. In that time, less than half a second in total, our minds were made up. In fact, the mind was basically made up in about the same time it takes to blink an eye.  Everything that followed was just window dressing.

This is the power of trust. It takes a split second for our brains to recognize a situation where it can let its guard down. This sets in motion a chain of neurological events that primes the brain for cooperation and relationship-building. It primes the oxytocin pump and gets it flowing. And this all happens just that quickly.

On the other side, if a brand isn’t trusted, a very different chain of events occurs just as quickly. The brain starts arming itself for protection. Our amygdala starts gearing up. We become suspicious and anxious.

This platform of brand trust — or lack of it — is built up over time. It is part of our sense-making machinery. Our accumulating experience with the brand either adds to our trust or takes it away.

But we must also realize that if we have strong feelings about a brand, one way or the other, it then becomes a belief. And once this happens, the brain works hard to keep that belief in place. It becomes virtually impossible at that point to change minds. This is largely because of the split-second reactions our study uncovered.

This sets very high stakes for marketers today. More than ever, we want to trust brands. But we also search for evidence that this trust is warranted in a very different way. Brand building is the accumulation of experience over all touch points, and each of those touch points has its own trust profile. Personal experience and word of mouth from those we know rank highest. Advertising on social media is one of the lowest.

The marketer’s goal should be to leverage trust-building for the brand in the most effective way possible. Do it correctly, through the right channels, and you have built trust that’s triggered in an eye blink. Screw it up, and you may never get a second chance.

The Deconstruction of Trust

Just over a week ago, fellow Insider Steven Rosenbaum wrote a post entitled “Trust Is In Decline Worldwide.” He quotes from the Edelman Trust Barometer Report for 2021. There, graph after graph shows this decline. And that feels exactly right. The Barometer “reveals an epidemic of misinformation and widespread mistrust of societal institutions and leaders around the world.”

Here in the ad biz, the decline of trust is nothing new. We’ve been seeing it slip for at least the last decade.

But that is not a universal truth. Yes, trust in advertising is in decline. But trust in brands — at least, some brands — has never been higher. And that is indicative of the decoupling we’re seeing between the concept of brand and the practice of advertising. One used to support the other. Now, even when an ad works, it may be stripping the trust from a brand.

This decline in advertising trust also varies from generation to generation. An Ofcom study of young adults aged 16 to 34 in the UK found that 91.6% of respondents had little or no trust in ads. The same study found that, when looking for trustworthy sources, 73.5% would turn to online reviews or recommendations from friends.

One reason for this erosion in trust is that advertising has been slumming. Social media advertising is the least trustworthy channel that exists. The vast majority of us don’t trust what we see on it. Yet the advertising dollars continue to pour into social media.

Yet more than ever, we want to trust a brand. The Edelman Report shows that business is the most trusted institution, ahead of NGOs, government and media. And the brands that are rising to the challenge are taking a more holistic approach to brand management.

More than ever, brands are not built on advertising. They are built on consumer experience, on ideals and on meeting promises. In short, they are built on instilling trust. Consumers, in turn, are making trust a bigger deal. Those aged 18 to 34, the very same demo that has no trust in advertising, are the first to say brand trust matters more than ever. They’re just looking for proof of that trust in different places.

But why is trust important? That seems like a dumb question, but it’s not. There are deeper levels of understanding that are required here. And we might just find the answer in southern Italy.

Trust to the north and south of Naples

Italy has an economic problem. It’s always been there, but it definitely got worse after World War Two. It’s called the Mezzogiorno Problem.

Mezzogiorno means “noon” in Italian. But it’s also a label for the south of Italy. Like many things in Italian culture, it can make even problems sound charming and romantic. It has something to do with being sunny.

Italy has two economies. The North’s economy has always been more robust than the South’s. Per capita income in the Mezzogiorno is 60% of the national average. Unemployment is twice as high. Despite repeated attempts by the government to kickstart the economy of the South, the money and talent in Italy typically flow north of Naples.

The roots of the Mezzogiorno problem go to a not totally surprising place: a lack of trust. Trust is also called social capital. And southern Italy has less social capital than the North. Part of this has to do with geography: villages in southern Italy are more isolated, and there is less interaction between them. Part of it has to do with systemic corruption and crime. Part of it has to do with something called campanilismo, whereby Italians’ loyalties belong first to their family, second to their village or city, third to their immediate region and, last of all, to any notion of belonging to a nation. People from the South have trouble trusting anyone not from their inner circle.

For all these reasons, the co-ops that transformed the agricultural industry in the north of Italy never gained a foothold in the South. If you were to look for an example of how low trust can lead to negative outcomes for all, it would be hard to find a better one than southern Italy.

But what does this have to do with advertising? That begins to become clear when we look at the impact trust has on our brains.

Our Brains On Trust

Neuroeconomist Paul J. Zak has found that trust plays a key role in the functioning of our brains. When trust is present, our brain produces oxytocin, which Zak calls the trust molecule. It literally rewards our brain when we work together with others. It pushes us to cooperate rather than be focused exclusively on our own self-interest. This is exactly what was missing in southern Italy.

But there’s another side to this: the dark side of oxytocin. It can also cause us emotional pain in stressful social situations. And these episodes tend to get embedded in us as bad memories, leading to a triggering of fear or anxiety in the future.

We have to think more carefully about this question of trust. The whole goal of advertising is simply to get an impression to the right person. I suspect most marketers might define an unsuccessful ad as one that gets ignored. But the reality might be far worse. An ad that is shown in an untrusted channel might cause an emotional deficit, leading to the creation of future anxiety about or animosity towards a brand.

Once this happens, the game is over. You now have a Mezzogiorno of marketing.