Re-engineering the Workplace

What happens when over 60,000 Microsoft employees are forced to work from home because of a pandemic? Funny you should ask. Microsoft just came out with a large-scale study that looks at exactly that question. The good news is that employees feel more included and supported by their managers than ever. But there is bad news as well:

“Our results show that firm-wide remote work caused the collaboration network of workers to become more static and siloed, with fewer bridges between disparate parts. Furthermore, there was a decrease in synchronous communication and an increase in asynchronous communication. Together, these effects may make it harder for employees to acquire and share new information across the network.”

To me, none of this is surprising. On a much smaller scale, we experienced exactly this when we experimented with a virtual workplace a decade ago. In fact, it virtually echoes the pros and cons of a virtual workplace that I have talked about in previous posts, particularly the two (one, two) that dealt with the concept of “burstiness” – those magical moments of collaborative creativity experienced when a room full of people gets “on a roll.”

What this study does do, however, is provide empirical evidence to back up my hunches. There is nothing like a global pandemic to allow the recruitment of a massive sample to study the impact of working from home.

In many, many aspects of our society, COVID was a game changer. It forcefully pushed us along the adoption curve, mandating widescale adoption of technologies that we probably would have been much happier to simply dabble in. The virtual workplace was one of these, but there were others.

Yet this example in particular, because of the breadth of its impact, gives us an insightful glimpse into one particular trend: we are increasingly swapping the ability to physically be together for a virtual connection mediated through technology. The first of these is a huge part of our evolved social strategies that are hundreds of thousands of years in the making. The second is barely a couple of decades old. There are bound to be consequences, both intended and unintended.

In today’s post, I want to take another angle to look at the pros and cons of a virtual workplace – by exploring how music has been made over the past several decades.

Supertramp and Studio Serendipity

My brother-in-law is a walking encyclopedia of music trivia. He put me on to this particular tidbit from one of my favorite bands of the 70’s and 80’s – Supertramp.

The band was in the studio working on their Breakfast in America album. In a corner of the studio, someone was playing a handheld video game during a break in the recording: Mattel’s Football. The game gave a distinctive double beep when you reached fourth down. Roger Hodgson heard it, and now that same sound can be heard at the 3:24 mark of “The Logical Song,” just after the lyric “d-d-digital.”

This is just one example of what I would call “Studio Serendipity.” For every band, every album, every song that was recorded collaboratively in the studio, there are examples like this of creativity that just sprang from people being together. It is an example of that “burstiness” I was talking about in my previous posts.

Billie Eilish and the Virtual Studio

But for this serendipity to even happen, you had to get into a recording studio. And the barriers to doing that were significant. You had to get a record deal or, if you were going independent, save up enough money to rent a studio.

For the other side of the argument, let’s talk about Billie Eilish. She and her brother Finneas embody virtual production. We first heard about Billie in 2015, when the two recorded “Ocean Eyes” in a bedroom in the family’s tiny L.A. bungalow and uploaded it to SoundCloud. Billie was 14 at the time. The song went viral overnight and did lead to a record deal, but their breakout album, When We All Fall Asleep, Where Do We Go?, was recorded in that same bedroom.

Digital technology dismantled the vertical hierarchy of record labels and democratized the industry. If that hadn’t happened, we might never have heard of Billie Eilish.

The Best of Both Worlds

Choosing between virtual and physical workplaces is not a binary choice. In the two examples I gave, creativity was a hybrid that came from both solitary inspiration and collaborative improvisation. The first thrives in a virtual workplace and the second works best when we’re physically together. There are benefits to both models, and these benefits are non-exclusive.

A hybrid model can give you the best of both worlds, but you have to take into account a number of things that might be a stretch for typical HR policies – things like evolutionary psychology, cognition and attentional focus, non-verbal communication strategies, and something that neuroscientist Antonio Damasio calls “somatic markers.” According to Damasio, we think as much with our bodies as we do with our brains.

Our performance in anything is tied to our physical surroundings. And when we are looking to replace a physical workplace with a virtual substitute, we have to appreciate the subconscious impact this has on us.

Re-engineering Communication

Take communication, for example. We may feel that we have more ways than ever to communicate with our colleagues, including an entire toolbox of digital platforms. But none of them account for this simple fact: the majority of our communication is non-verbal. We communicate with our eyes, our hands, our bodies, our expression and the tone of our voice. Trying to squeeze all this through the trickle of bandwidth that technology provides, even when we have video available, is just going to produce frustration. It is no substitute for being in the same room together, sharing the same circumstances. It would be like trying to race a car with only one cylinder firing.

This is perhaps the single biggest drawback to the virtual workplace – this lack of “somatic” connection – the shared physical bond that underlies so much of how we function. When you boil it down, it is the essential ingredient for “burstiness.” And I just don’t think we have a technological substitute for it – not at this point, anyway.

But the same person who discovered burstiness does have one rather counterintuitive suggestion. If we can’t be in the same room together, perhaps we have to “dumb down” the technology we use. Anita Williams Woolley suggests the good, old-fashioned phone call might truly be the next best thing to being there.

Adrift in the Metaverse

Humans are nothing if not chasers of bright, shiny objects. Our attention is always focused beyond the here and now. That is especially true when here and now is a bit of a dumpster fire.

The ultrarich know that this is part of the human psyche, and they are doubling down on it. Jeff Bezos and Elon Musk are betting on space. But others — including Mark Zuckerberg — are betting on something called the metaverse.

Just this past summer, Zuck told his employees about his master plan for Facebook:

“Our overarching goal across all of (our) initiatives is to help bring the metaverse to life.”

So what exactly is the metaverse? According to Wikipedia, it is

“a collective virtual shared space, created by the convergence of virtually enhanced physical reality and physically persistent virtual space, including the sum of all virtual worlds, augmented reality, and the Internet.”

The metaverse is a world of our own making, which exists in the dimensions of a digital reality. There we imagine we can fix what we screwed up in the maddeningly unpredictable real world. It is the ultimate in bright, shiny objects.

Science fiction and the entertainment industry have been toying with the idea of the metaverse for some time now. The term itself comes from Neal Stephenson’s 1992 novel “Snow Crash.” It has been given the Hollywood treatment numerous times, notably in “The Matrix” and “Ready Player One.” But Silicon Valley venture capitalists are rushing to make fiction into fact.

You can’t really blame us for throwing in the towel on the world we have systematically wrecked. There are few glimmers of hope out there in the real world. What we have wrought is painful to contemplate. So we are doing what we’ve always done: reaching for what we want rather than fixing what we have. Take the Reporters Without Borders Uncensored Library, for example.

There are many places in the real world where journalism is censored, like Russia, the Middle East, Vietnam and China. But in the metaverse, there is the option of leapfrogging over all the political hurdles we stumble over in the real world. So Reporters Without Borders and two German creative agencies built a meta library in the meta world of Minecraft. Here, censored articles are made into virtual books, accessible to all who want to check them out.

It’s hard to find fault with this. Censorship is a tool of oppression. Here, a virtual world offered an inviting loophole to circumvent it. The metaverse came to the rescue. What is the problem with that?

The biggest risk is this: We weren’t built for the metaverse. We can probably adapt to it, somewhat, but everything that makes us tick has evolved in a flesh and blood world, and — to quote a line from Joni Mitchell’s “Big Yellow Taxi,” “You don’t know what you’ve got till it’s gone.”

It’s fair to say that right now the metaverse is a novelty. Most of your neighbors, friends and family have never heard of it. But odds are it will become our life. In a 2019 article called “Welcome to the Mirror World” in Wired, Kevin Kelly explained, “we are building a 1-to-1 map of almost unimaginable scope. When it’s complete, our physical reality will merge with the digital universe.”

In a Forbes article, futurist Cathy Hackl gives us an example of what this merger might look like:

“Imagine walking down the street. Suddenly, you think of a product you need. Immediately next to you, a vending machine appears, filled with the product and variations you were thinking of. You stop, pick an item from the vending machine, it’s shipped to your house, and then continue on your way.”

That sounds benign — even helpful. But if we’ve learned one thing it’s this: When we try to merge technology with human behavior, there are always unintended consequences that arise. And when we’re talking about the metaverse, those consequences will likely be massive.

It is hubristic in the extreme to imagine we can engineer a world that will be a better match for our evolved humanware mechanics than the world we actually evolved within: arrogant to imagine we can build that world, and just as arrogant to imagine we can thrive within it.

We have a bright, shiny bias built into us that will likely lead us to ignore the crumbling edifice of our reality. German futurist Gerd Leonhard, for one, warns us about an impending collision between technology and humanity:

“Technology is not what we seek but how we seek: the tools should not become the purpose. Yet increasingly, technology is leading us to ‘forget ourselves.’”

Imagine a Pandemic without Technology

As the writer of a weekly post that tends to look at the intersection between human behavior and technology, the past 18 months have been interesting – and by interesting, I mean a twisted ride through gut-wrenching change unlike anything I have ever seen before.

I can’t even narrow it down to 18 months. Before that, there was plenty more that was “unprecedented” – to cherry-pick a word from my post from a few weeks back. I have now been writing for MediaPost in one place or another for 17 years. My very first post was on August 19, 2004. That was 829 posts ago. If you add the additional posts I’ve done for my own blog – outofmygord.com – I’ve just ticked over 1,100 on my odometer. That’s a lot of soul searching about technology. And the last several months have still been in a class by themselves.

Now, part of this might be where my own head is at. Believe it or not, I do sometimes try to write something positive. But as soon as my fingers hit the keyboard, things seem to spiral downwards. Every path I take seems to take me somewhere dark. There has been precious little that has sparked optimism in my soul.

Today, for example, prior to writing this, I took three passes at writing something else. Each quickly took a swerve towards impending doom. I’m getting very tired of this. I can only imagine how you feel, reading it.

So I finally decided to try a thought experiment. “What if,” I wondered, “we had gone through the past 18 months without the technology we take for granted? What if there was no Internet, no computers, no mobile devices? What if we had lived through the pandemic with only the technology we had, say, a hundred years ago, during the global Spanish Flu pandemic that began in 1918? Perhaps the best way to determine the sum total contribution of technology is to do it by process of elimination.”

The Cons

Let’s get the negatives out of the way. First, you might say that technology enabled the flood of misinformation and conspiracy theorizing that has been so top-of-mind for us. Well, yes – and no.

Distrust in authority is nothing new. It’s always been there, at one end of a bell curve that spans the attitudes of our society. And nothing brings the outliers of society into global focus faster than a crisis that affects all of us.

There was public pushback against the very first vaccine ever invented: the smallpox vaccine. Now granted, the early method was to rub pus from a cowpox blister into a cut in your skin and hope for the best. But it worked. Smallpox is now a thing of the past.

And, if we are talking about pushback against public health measures, that’s nothing new either. Exactly the same thing happened during the 1918-1919 pandemic. Here’s one eerily familiar excerpt from a journal article looking at the issue: “Public-gathering bans also exposed tensions about what constituted essential vs. unessential activities. Those forced to close their facilities complained about those allowed to stay open. For example, in New Orleans, municipal public health authorities closed churches but not stores, prompting a protest from one of the city’s Roman Catholic priests.”

What is different, thanks to technology, is that public resistance is so much more apparent than it’s ever been before. And that resistance comes with faces and names we know attached. People are posting opinions on social media that they would probably never say to you in a face-to-face setting, especially if they knew you disagreed with them. Our public and private discourse is now held at arm’s length by technology. Gone are all the moderating effects that come with sharing the same physical space.

The Pros

Try as I might, I couldn’t think of another “con” that technology has brought to the past 18 months. The “pro” list, however, is far too long to cover in this post, so I’ll just mention a few that come immediately to mind.

Let’s begin with the counterpoint to the aforementioned “con” – the misinformation factor. While misinformation definitely spread over the past year and a half, so did reliable, factual information. For those willing to pay attention to it, technology let us learn what we needed to know in order to adopt public health measures at a speed previously unimagined. Without it, we would have been slower to act and – perhaps – fewer of us would have acted at all. At worst, in this case technology probably nets out to zero.

But technology also enabled the world to keep functioning, even if it was in a different form. Working from home would have been impossible without it. Commercial engines kept chugging along. Business meetings switched to online platforms. The Dow Jones Industrial Average, as of this writing, is over 20% higher than it was before the pandemic. In contrast, over the 1918-1919 pandemic, the stock market ended the third wave almost 32% lower than it was at the start of the first. Of course, there are other factors to consider, but I suspect we can thank technology for at least some of that difference.
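
The arithmetic behind that comparison is just a percent change between two index levels. Here’s a minimal sketch; the numbers are illustrative placeholders, not actual closing figures:

```python
def percent_change(start: float, end: float) -> float:
    """Percentage change from a starting value to an ending value."""
    return (end - start) / start * 100

# Illustrative placeholder values only -- not actual index closes.
print(round(percent_change(29_000, 35_000), 1))  # 20.7: the "over 20% higher" case
print(round(percent_change(100, 68), 1))         # -32.0: the "almost 32% lower" case
```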

It’s easy to point to the negatives that technology brings, but if you consider it as a whole, technology is overwhelmingly a blessing.

What was interesting to me in this thought experiment was how apparent it became that technology keeps the cogs of our society turning more effectively, but that when there is a price to be paid, it typically comes at the cost of our social bonds.

Why is Everything Now ‘Unprecedented’?

Just once, I would like to get through one day without hearing the word “unprecedented.” And I wonder, is that just the media trying to get a click, or is the world truly that terrible?

Take the Olympics. In my lifetime, I’ve never seen an Olympics like this one. Empty stands. Athletes having to leave within 48 hours of their last event. Opening and closing ceremonies unlike anything we have ever seen. It’s, well — unprecedented.

The weather is unprecedented. What is happening in politics is unprecedented. The pandemic is unprecedented, at least in our lifetimes. I don’t know about you, but I feel like I’m watching a blockbuster where the world will eventually end — but we just haven’t got to that part of the movie yet. I feel the palpable sensation of teetering on the edge of a precipice. And I’m pretty sure it’s happened before.

Take the lead-ups to the two world wars, for example. If you plot a timeline of the events that led to either July 28, 1914 or Sept. 1, 1939, there is a noticeable acceleration of momentum. At first, the points on the timeline are spread apart, giving the world a chance to once again catch its collective breath. But as we get closer and closer to those dates circled in red, things pick up. There are cascades of events that eventually lead to the crisis point. Are we in the middle of such a cascade?

Part of this might just be network knock-on effects that happen in complex environments. But I also wonder if we have just become a little shell-shocked, nudged into a numb acceptance of things we would have once found intolerable.

Author and geographer Jared Diamond calls this “creeping normality.” In his book “Collapse: How Societies Choose to Fail or Succeed,” he used the example of the deforestation and environmental degradation that happened on Easter Island — and how, despite the impending doom, the islanders still decided to chop down the last tree: “I suspect, though, that the disaster happened not with a bang but with a whimper. After all, there are those hundreds of abandoned statues to consider. The forest the islanders depended on for rollers and rope didn’t simply disappear one day—it vanished slowly, over decades.”

Creeping normality continually and imperceptibly nudges us from the unacceptable to the acceptable and we don’t even notice it’s happening. It’s a cognitive bias that keeps us from seeing reality for what it is. Creeping normality is what happens when our view of the world comes through an Overton Window.

I have mentioned the concept of the Overton Window before. The term was introduced by political analyst Joseph Lehman and named after his colleague, Joseph Overton. It was initially coined to show that the range of political policies the public finds acceptable shifts over time. What was once considered unthinkable can eventually become acceptable or even popular, given the shifting sensitivities of the public. As an example, the antics of Donald Trump would once have been considered unacceptable in any public venue — but as our reality shifted, we saw them eventually become mainstream from an American president.

I suspect the media does the same thing with our perception of the world in general. The news media demands the exceptional. We don’t click on “ordinary.” So it consistently shifts our Overton Window of what we pay attention to, moving us toward the outrageous. Things that once would have caused riots are now greeted with a yawn. This is compounded by the unrelenting pace of the news cycle. What was outrageous yesterday slips from view, replaced by whatever is outrageous today.

And while I’m talking about outrageous, let’s look at the root of that term. The whole point of something being outrageous is to prompt us into being outraged — or moved enough to take action. And, if our sensitivity to outrage is constantly being numbed, we are no longer moved enough to act.

When we become insensitive to things that are unprecedented, we’re in a bad place. Our trust in information is gone. We seek information that comforts us that the world is not as bad as we think it is. And we ignore the red flags we should be paying attention to.

If you look at the lead-ups to both world wars, you see this same pattern. Things that happened regularly in 1914 or 1939, just before the outbreak of war, would have been unimaginable just a few years earlier. The momentum of mayhem picked up as the world raced to follow a rapidly moving Overton Window. Before we knew it, all hell had broken loose and the world was left with only one alternative: going to war.

An Overton Window can just happen, or it can be intentionally planned. Politicians from the fringes, especially the right, have latched on to the Window, taking something intended to be an analysis and turning it into a strategy. They now routinely float “policy balloons” that they know are on the fringe, hoping to trigger a move in our Window to either the right or left. Over time, they can use this strategy to introduce legislation that would once have been vehemently rejected.

The danger in all this is the embedding of complacency. Ultimately, our willingness to take action against threat is all that keeps our society functioning. Whether it’s our health, our politics or our planet, we have to be moved to action before it’s too late.

When the last tree falls on Easter Island, we don’t want to be the ones with the axe in our hands.

A Hybrid Work Approach To Creativity

Last week I introduced the concept of burstiness, meaning the bursts of creativity that can happen when a group is on a roll.

Burstiness requires trust: a connection in the group that creates psychological safety. But I would go one step further. It also requires respect — an intuitive acknowledgement of the value of contribution from everyone in the group. It’s a type of recursive high that builds on itself, as each contribution sparks something else from the group. It’s like the room has caught fire and, as the burstiness continues, everyone tries to add to the flames.

We’ve used jazz as an example of burstiness. But there are other great examples, like theater improv. Research has found that the brain actually changes how it acts when it’s engaged in these activities, according to a Psychology Today article.

A 2008 fMRI study found that different parts of the brain lit up when musicians improvised rather than just playing scales. The brain shifted into a different gear. The dorsolateral prefrontal cortex decreased in activity, and the medial prefrontal cortex increased. This is a fascinating finding, because the dorsolateral prefrontal cortex is the part of the brain where we look at ourselves critically, and the medial prefrontal cortex is linked with language and creativity. A follow-up study was done on improv actors, and the findings were remarkably similar.

This modality of the brain is important to understand. If we can create the conditions that lead to creativity, magic can happen.

Also, this is a team sport. Creativity is almost never exclusively a solo pursuit.

In 1995, Alfonso Montuori and Ronald Purser wrote an essay deconstructing the myth of the lone genius. In it, they showed that creativity almost always relies on social interaction. There is a system of creativity, an ecology that creates the conditions necessary for inspiration.

We love the story of the eccentric solitary genius toiling away in a loft somewhere, but it almost never happens that way. Da Vinci and Michelangelo had “schools” of apprentices that helped turn out their masterpieces. Mozart was a pretty social guy whose creativity fed off interactions with his court patrons and other composers of the era.

But we also have to understand that a little creative magic can go a long way. You don’t have to be 100% creative all the time. In a corporate setting, creativity is a spark. Then there is a lot of non-creative work required to fan it into a flame.

Given this, perhaps the advent of hybrid virtual-traditional workplace models might be a suitable fit for encouraging inspiration, if we use them correctly and don’t try to force-fit our intentions into the wrong workplace framework.

A virtual work-from-home environment is great for efficiency and getting stuff done. Our boss isn’t hovering over our cubicle asking us if we “have a second” to discuss whatever happens to be on his mind at this particular moment. We’re not wasting hours in tedious, unproductive meetings or on a workplace commute.

On the flip side, if creativity is our goal, there is no substitute for being “in the room where it happens.” A full bandwidth of human interaction is required for the psychological safety we need to take creative risks. These creative summits need to be in person and carefully constructed to provide the conditions needed for creativity. Interdisciplinary and diverse teams who know and trust each other implicitly need to be physically brought together for “improv” sessions. The rules of engagement should be clearly understood.

And unless bosses can participate fully “in kind” (a great example of this is Trevor Noah in the “Daily Show” example I mentioned last week from Adam Grant’s “Worklife” podcast), they should stay the hell out of the room.

Be ruthless about limiting attendance for creative sessions to just those who bring something to the table and have already built a psychological “safe space” with each other through face-to-face connections. Just one wrong person in the room can short-circuit the entire exercise.

This hybrid model doesn’t allow for the serendipity of creativity — that chance interaction in the lunchroom or the offhand comment that is the first domino to fall in an inspirational chain reaction. It also puts a constrained timeline on creativity, forcing it into specific squares on a calendar. But at least it recognizes the unique prerequisites of creativity and addresses them in an honest manner.

One last thought on creativity. Again, we go back to Anita Williams Woolley, the Carnegie Mellon professor who first identified “burstiness.” In a 2018 study with Christopher Riedl, she showed that even with a remote workplace, “bursty” communications can lead to more innovative teams.

“People often think that constant communication is most effective, but actually, we find that bursts of rapid communication, followed by longer periods of silence, are telltale signs of successful teams,” she notes.

This communication template mimics the hybrid model I mentioned before. It compartmentalizes our work activities, adopting the communication styles that best suit the different modalities required: the effectiveness of collaboration and innovation, and the efficiency of getting the work done. Woolley suggests using a synchronous form of communication for the “bursts” — perhaps even the old-fashioned phone. And then leave everybody alone for a period of radio silence and let them get their work done.
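
If you want to see what “bursty” communication looks like in numbers, one standard way to quantify it is the burstiness coefficient from network science (a Goh-Barabási measure, not, as far as I know, Woolley’s own). It compares the variability of the gaps between messages to their average gap. A minimal sketch with hypothetical message timestamps:

```python
from statistics import mean, stdev

def burstiness(timestamps: list[float]) -> float:
    """Goh-Barabasi burstiness, B = (sigma - mu) / (sigma + mu), computed
    over the gaps between consecutive events: -1 for perfectly regular
    events, near 0 for random ones, approaching 1 for tight bursts
    separated by long silences."""
    gaps = [later - earlier for earlier, later in zip(timestamps, timestamps[1:])]
    mu, sigma = mean(gaps), stdev(gaps)
    return (sigma - mu) / (sigma + mu)

# Hypothetical message times, in minutes, for two teams:
steady = [0, 30, 60, 90, 120, 150, 180, 210]  # a metronomic drip of messages
bursty = [0, 1, 2, 3, 170, 171, 172, 173]     # rapid burst, long silence, burst

print(round(burstiness(steady), 2))  # -1.0: perfectly regular
print(round(burstiness(bursty), 2))  # ~0.43: the bursty signature
```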

Seeking “Burstiness” When Working from Home

I was first introduced to the concept of “burstiness” by psychologist Adam Grant in his podcast, “Worklife.” In one episode, he visits the writers’ room at “The Daily Show” and probes the creativity that crackles when those writers are on a roll. A big part of that energy, according to Grant, comes from “burstiness.”

The term was initially coined by Anita Williams Woolley, associate professor of organizational behavior and theory at Carnegie Mellon University.

Burstiness is, according to Grant,

“like the best moments in improv jazz. Someone plays a note, someone else jumps in with a harmony, and pretty soon, you have a collective sound that no one planned. Most groups never get to that point, but you know burstiness when you see it. At ‘The Daily Show,’ the room just literally sounds like it’s bursting with ideas.”

Last week, we reran a post I wrote at the beginning of the pandemic wondering if we might be forsaking some important elements of team effectiveness in our rush to embrace the virtual workplace. Our brains have evolved to be most effective at creating relationships with others when we’re face-to-face. There is a rich bandwidth of communication through which we build trust in others that is reliant on physical proximity.

Zoom just doesn’t cut it.

So, would this idea of burstiness be sacrificed in a remote work environment? Let’s dig a little deeper.

Grant outlines the things that need to be in place for burstiness to occur:

  • Spending time with each other
  • Psychological safety
  • A proper balance of structure
  • The right people in the room

Let’s look at these in reverse order.

The right people in the room

First, how do you get the right people in the room – or, in the case of a remote workforce, on the same Zoom call? Here, diversity seems to be the key. You need different perspectives. Creativity comes from diversity, not sameness.

Dr. Woolley offers the example of the Kennedy and Lincoln presidential cabinets. Kennedy’s cabinet was composed of Ivy League intellectual elites who all came from similar backgrounds and had the same ideological view of the world. Lincoln’s cabinet was fractious, to say the least. After his election, Lincoln reached out to bitter rivals who had run against him for the presidency — including Salmon Chase and William Seward — and gave them senior positions in his cabinet. Lincoln’s cabinet is generally considered by historians to be the most effective political team in American history. Kennedy’s suffered from a debilitating case of “groupthink” that launched the Bay of Pigs invasion and almost ignited another world war.

There is no reason why a virtual workplace cannot embrace diversity. You just have to recruit the right people through bias-resistant practices like blind auditions and using multiple interviewers.

A proper balance of structure

Grant says the right structure provides the rules of engagement for creative bursts. You need some basic guidelines so you can focus on the work and not the mechanics of the process. To use Grant’s example, jazz improv seems unstructured, but there are actually some commonly understood ground rules on which the improvisation is built.

This brings to mind psychologist Mihaly Csikszentmihalyi’s concept of Flow, the condition in which creativity just flows naturally. Structure allows Flow to happen by giving the brain the scaffolding it needs to focus wholly on the task at hand. There is no reason why that structure can’t apply equally to traditional and virtual work teams.

But the next two conditions get a little trickier for the virtual workplace. Let’s look at them together:

Psychological safety and spending time together

Psychological safety is a term coined by Harvard Business School professor Amy Edmondson. When it comes to promoting “burstiness,” psychological safety gives us the confidence to contribute without being punished or ridiculed. It allows us to take creative risks. Another word for it would be trust.

And that brings us to the second part — spending time together — and the challenge that poses in a virtual workplace. Trust is not built overnight, and it is not built over Zoom or Slack.

As I said in my previous post, organizational behavior specialist Mahdi Roghanizad from Ryerson University has found that the connections in our brains that create trust may not even be activated unless we’re face-to-face with someone. We need eye contact to nudge this part of ourselves into life.

So, if creativity is a requirement in the workplace, and connecting face-to-face is required to foster creativity, is a virtual office a non-starter? Not necessarily. In my next post, I’ll look at some ways we might still be able to achieve burstiness — even when we’re at home in our pajamas.

Getting Bitch-Slapped by the Invisible Hand

Adam Smith first talked about the invisible hand in 1759. He was looking at the divide between the rich and the poor and said, in essence, that “greed is good.”

Here is the exact wording:

“They (the rich) are led by an invisible hand to make nearly the same distribution of the necessaries of life, which would have been made, had the earth been divided into equal portions among all its inhabitants, and thus without intending it, without knowing it, advance the interest of the society.”

The effect of “the hand” is most clearly seen in the wide-open market that emerges after established players collapse and make way for new competitors riding a wave of technical breakthroughs. Essentially, it is a cycle.

But something is happening that may never have happened before. For the past 300 years of our history, the one constant has been the trend of consumerism. Economic cycles have rolled through, but all have been in the service of us having more things to buy.

Indeed, Adam Smith’s entire theory depends on greed: 

“The rich … consume little more than the poor, and in spite of their natural selfishness and rapacity, though they mean only their own conveniency, though the sole end which they propose from the labours of all the thousands whom they employ, be the gratification of their own vain and insatiable desires, they divide with the poor the produce of all their improvements.”

It’s the trickle-down theory of gluttony: Greed is a tide that raises all boats.

The Theory of The Invisible Hand assumes there are infinite resources available. Waste is necessarily built into the equation. But we have now gotten to the point where consumerism has been driven past the planet’s ability to sustain our greedy grasping for more.

Nobel-Prize-winning economist Joseph Stiglitz, for one, recognized that environmental impact is not accounted for in this theory. Also, if the market alone drives things like research, it will inevitably become biased toward benefits for the individual rather than the common good.

There needs to be a more communal counterweight to balance the effects of individual greed. Given this, the new age of consumerism might look significantly different.

There is one outcome of market-driven economics that is undeniable: all the power lies in the connection between producers and consumers. Because the world has been built on the predictable truth of our always wanting more, we have been given the ability to disrupt that foundation simply by changing our value equation: buying for the greater good rather than our own self-interest.

I’m skeptical that this is even possible.

It’s a little daunting to think that our future survival relies on our choices as consumers. But this is the world we have made. Consumption is the single greatest driver of our society. Everything else is subservient to it.

Government, science, education, healthcare, media, environmentalism: All the various planks of our societal platform rest on the cross-braces of consumerism. It is the one behavior that rules all the others. 

This becomes important to think about because this shit is getting real — so much faster than we thought possible.

I write this from my home, which is about 100 miles from the village of Lytton, British Columbia. You might have heard it mentioned recently. On June 29, Lytton reported the highest temperature ever recorded in Canada: a scorching 121.3 degrees Fahrenheit (49.6 degrees Celsius for my Canadian readers). That’s higher than the hottest temperature ever recorded in Las Vegas. Lytton is 1,000 miles north of Las Vegas.

As I said, that was how Lytton made the news on June 29. But it also made the news again on June 30. That was when a wildfire burned almost the entire town to the ground.

In one week of an unprecedented heat wave, hundreds of sudden deaths occurred in my province. It’s believed the majority of them were caused by the heat.

We are now at the point where we have to shift the mental algorithms we use when we buy stuff. Our consumer value equation has always been self-centered, based on the calculus of “what’s in it for me?” It was this calculation that made Smith’s Invisible Hand possible.

But we now have to change that behavior and make choices that embrace individual sacrifice. We have to start buying based on “What’s best for us?”

In a recent interview, a climate-change expert said he hoped we would soon see carbon-footprint stickers on consumer products. Given a choice between two pairs of shoes, one that was made with zero environmental impact and one that was made with a total disregard for the planet, he hoped we would choose the former, even if it was more expensive.

I’d like to think that’s true. But I have my doubts. Ethical marketing has been around for some time now, and at best it’s a niche play. According to the Canadian Coalition for Farm Animals, the vast majority of egg buyers in Canada — 98% — buy caged eggs even though we’re aware that the practice is hideously cruel.  We do this because those eggs are cheaper.

The sad fact is that consumers really don’t seem to care about anything other than their own self-interest. We don’t make ethical choices unless we’re forced to by government legislation. And then we bitch like hell about our rights as consumers. “We should be given the choice,” we chant.  “We should have the freedom to decide for ourselves.”

Maybe I’m wrong. I sure hope so. I would like to think — despite recent examples to the contrary of people refusing to wear face masks or get vaccinated despite a global pandemic that took millions of lives — that we can listen to the better angels of our nature and make choices that extend our ability to care beyond our circle of one.

But let’s look at our track record on this. From where I’m sitting, 300 years of continually making bad choices have now brought us to the place where we no longer have the right to make those choices. This is what The Invisible Hand has wrought. We can bitch all we want, but that won’t stop more towns like Lytton, B.C., from burning to the ground.

Why Our Brains Struggle With The Threat Of Data Privacy

We don’t want to share our personal data but, according to a recent study reported on by MediaPost’s Laurie Sullivan, we want the brands we trust to know us when we come shopping. It seems paradoxical.

But it’s not — really. It ties in with the way we’ve always thought.

Again, we just have to understand that we really don’t understand how the data ecosystem works — at least, not on an instant and intuitive level. Our brains have no evolved mechanisms that deal with new concepts like data privacy, so we have borrowed parts of the brain that evolved for other purposes. Evolutionary biologists call this “exaptation.”

For example, the way we deal with brands seems to be the same way we deal with people — and we have tons of experience doing that. Some people we trust. Most people we don’t. With the people we trust, we have no problem sharing something of ourselves. In fact, it’s exactly that sharing that nurtures relationships and helps them grow.

It’s different with people we don’t trust. Not only do we not share with them, we work to avoid them, putting physical distance between us and them. We’d cross to the other side of the street to avoid bumping into them.

In a world that was ordered and regulated by proximity, this worked remarkably well. Keeping our enemies at arm’s length generally kept us safe from harm.

Now, of course, distance doesn’t mean the same thing it used to. We now maneuver in a world of data, where proximity and distance have little impact. But our brains don’t know that.

As I said, the brain doesn’t really know how digital data ecosystems work, so it does its best to substitute concepts it has evolved to handle for those it doesn’t understand at an intuitive level.

The proxy for distance the brain seems to use is task focus. If we’re trying to do something, everything related to that thing is “near” and everything not relevant to it is “far.” But this is an imperfect proxy at best and an outright misleading one at worst.

For example, we will allow our data to be collected in order to complete a task. The task is “near.” But in most cases, what is done with the data we share has little to do with the task we’re trying to accomplish. The brain labels it “far,” and it therefore poses no immediate threat.

It’s a bait-and-switch tactic that data harvesters have perfected. Our trust-warning systems are not engaged because there are no proximate signs to trigger them. Any potential breaches of trust happen well after the fact – if they happen at all. Most times, we’re simply not aware of where our data goes or what happens to it. All we know is that allowing that data to be collected takes us one step closer to accomplishing our task.

That’s what sometimes happens when we borrow one evolved trait to deal with a new situation:  The fit is not always perfect. Some aspects work, others don’t.

And that is exactly what is happening when we try to deal with the continual erosion of online trust. In the moment, our brain is trying to apply the same mechanisms it uses to assess trust in a physical world. What we don’t realize is that we’re missing the warning signs our brains have evolved to intuitively look for.

We also drag this evolved luggage with us when we’re dealing with our favorite brands. One of the reasons you trust your closest friends is that they know you inside and out. This intimacy is a product of a physical world. It comes from sharing the same space with people.

In the virtual world, we expect the brands we know and love to have this same knowledge of us. It frustrates us when we are treated like a stranger. Think of how you would react if the people you love the most gave you the same treatment.

This jury-rigging of our personal relationship machinery to do double duty for the way we deal with brands may sound far-fetched, but marketing brands have only been around for a few hundred years. That is just not enough time for us to evolve new mechanisms to deal with them.

Yes, the rational, “slow loop” part of our brains can understand brands, but the “fast loop” has no “brand” or “data privacy” modules. It has no choice but to use the functional parts it does have.

As I mentioned in a previous post, there are multiple studies that indicate that it’s these parts of our brain that fire instantly, setting the stage for all the rationalization that will follow. And, as our own neuro-imaging study showed, it seems that the brain treats brands the same way it treats people.

I’ve been watching this intersection between technology and human behavior for a long time now. More often than not, I see this tendency of the brain to make split-second decisions in environments where it just doesn’t have the proper equipment to make those decisions. When we stop to think about these things, we believe we understand them. And we do, but we had to stop to think. In the vast majority of cases, that’s just not how the brain works.

The Privacy War Has Begun

It started innocently enough….

My iPhone just upgraded itself to iOS 14.6, and the privacy protection purge began.

In late April, Apple added App Tracking Transparency (ATT) to iOS (actually in 14.5, but for reasons mentioned in this Forbes article, I hadn’t noticed the change until the most recent update). Now, whenever I launch an app that is part of the online ad ecosystem, I’m asked whether I want to share data to enable tracking. I always opt out.

These alerts have been generally benign. They reference benefits like “more relevant ads,” a “customized experience” and “helping to support us.” Some assume you’re opting in, making opting out a much more circuitous and time-consuming process. Most also avoid the words “tracking” and “privacy.” One referred to it in these terms: “Would you allow us to refer to your activity?”

My answer is always no. Why would I want to customize an annoyance and make it more relevant?

All in all, it’s a deceptively innocent wrapper to put on what will prove to be a cataclysmic event in the world of online advertising. No wonder Facebook is fighting it tooth and nail, as I noted in a recent post.

This shot across the bow of online advertising marks an important turning point for privacy. It’s the first time that someone has put users ahead of advertisers. Everything up to now has been lip service from the likes of Facebook, telling us we have complete control over our privacy while knowing that actually protecting that privacy would be so time-consuming and convoluted that the vast majority of us would do nothing, thus keeping its profitability flowing through the pipeline.

The simple fact of the matter is that without its ability to micro-target, online advertising just isn’t that effective. Take away the personal data, and online ads are pretty non-engaging. Also, given our continually improving ability to filter out anything that’s not directly relevant to whatever we’re doing at the time, these ads are very easy to ignore.

Advertisers need that personal data to stand any chance of piercing our non-attentiveness long enough to get a conversion. It’s always been a crapshoot, but Apple’s ATT just stacked the odds very much against the advertiser.

It’s about time. Facebook and online ad platforms have had little to no real pushback against the creeping invasion of our privacy for years now. We have no idea how extensive and invasive this tracking has been. The only inkling we get is when the targeting nails the ad delivery so well that we swear our phone is listening to our conversations. And, in a way, it is. We are constantly under surveillance.

In addition to Facebook’s histrionic bitching about Apple’s ATT, others have started to find workarounds, as reported by 9to5Mac. ATT specifically targets the IDFA (Identifier for Advertisers), which enables cross-app tracking via a unique identifier. Chinese ad networks backed by the state-endorsed China Advertising Association were encouraging the adoption of CAID identifiers as an alternative to IDFA. Apple has gone on record as saying ATT will be globally implemented and enforced. While CAID can’t be policed at the OS level, Apple has said that apps that track users without their consent by any means, including CAID, could be removed from the App Store.

We’ll see. Apple doesn’t have a very consistent track record when it comes to holding the line against Chinese app providers. WeChat, for one, has been granted exceptions to Apple’s developer restrictions that have not been extended to anyone else.

For its part, Google has taken a tentative step toward following Apple’s lead with its new privacy initiative on Android devices, as reported by SlashGear. Google Play has asked developers to share what data they collect and how they use it. At this point, Google won’t be requiring opt-in prompts as Apple does.

All of this marks a beginning. If it continues, it will throw a Kong-sized monkey wrench into the works of online advertising. The entire ecosystem is built on ad-supported models that depend on collecting and storing user data. Apple has begun nibbling away at that foundation.

The toppling has begun.

The Profitability Of Trust

Some weeks ago, I wrote about the crisis of trust identified by the Edelman Trust Barometer study and its impact on brands. In that post, I said that the trust in all institutions had been blown apart, hoisted on the petard of our political divides.

We don’t trust our government. We definitely don’t trust the media – especially the media that sits on the other side of the divide. Weirdly, our trust in NGOs has also slipped, perhaps because we suspect them to be politically motivated.

So whom — or what — do we trust? Well, apparently, we still trust corporations. We trust the brands we know. They, alone, seem to have been able to stand astride the chasm that is splitting our culture.

As I said before, I’m worried about that.

Now, I don’t doubt there are well-intentioned companies out there. I know there are several of them. But there is something inherent in the DNA of a for-profit company that I feel makes it difficult to trust them. And that something was summed up years ago by economist Milton Friedman, in what is now known as the Friedman Doctrine. 

In that doctrine, Friedman says a corporation should have only one purpose: “An entity’s greatest responsibility lies in the satisfaction of the shareholders.” The corporation should, therefore, always endeavor to maximize its revenues to increase returns for the shareholders.

So, a business will be trustworthy as long as it fits its financial interest to be trustworthy. But what happens when those two things come into conflict, as they inevitably will?

Why is it inevitable, you ask? Why can’t a company be profitable and worthy of our trust? Ah, that’s where, sooner or later, the inevitable conflict will come.

Let’s strip this down to the basics with a thought experiment.

In a 2017 article in the Harvard Business Review, neuroscientist Paul J. Zak talks about the neuroscience of trust. He explains how he discovered that oxytocin is the neurochemical basis of trust — what he has since called The Trust Molecule.

To do this, he set up a classic trust task borrowed from Nobel laureate economist Vernon Smith:

“In our experiment, a participant chooses an amount of money to send to a stranger via computer, knowing that the money will triple in amount and understanding that the recipient may or may not share the spoils. Therein lies the conflict: The recipient can either keep all the cash or be trustworthy and share it with the sender.”

The choice of this task speaks volumes. It also lays bare the inherent conflict that sooner or later will face all corporations: money or trust? This is especially true of companies that have shareholders. Our entire capitalist ethos is built on the foundation of the Friedman Doctrine. Imagine what those shareholders will say when given the choice outlined in Zak’s experiment: “Keep the money, screw the trust.” Sometimes, you can’t have both. Especially when you have a quarterly earnings target to hit.
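
To make that conflict concrete, here is a minimal sketch of the payoff structure in that task. The $10 stake is a hypothetical figure of my own; Zak’s article describes the actual protocol:

```python
def trust_game(sent: float, returned_fraction: float, endowment: float = 10.0):
    """One round of the trust task: the sender risks `sent` out of an
    endowment, the amount triples in transit, and the recipient returns
    some fraction of the tripled pot -- or keeps it all."""
    tripled = sent * 3
    returned = tripled * returned_fraction
    sender_payoff = endowment - sent + returned
    recipient_payoff = tripled - returned
    return sender_payoff, recipient_payoff

# A hypothetical $10 endowment, with the sender risking all of it:
print(trust_game(10, 0.5))  # (15.0, 15.0): a trustworthy recipient; both gain
print(trust_game(10, 0.0))  # (0.0, 30.0): recipient keeps the cash; trust punished
```

The second line is the Friedman Doctrine in miniature: keeping everything maximizes the recipient’s return, right up until nobody is willing to send them money anymore.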

For humans, trust is our default position. It has been shown through game theory research using the Prisoner’s Dilemma that the best strategy for evolutionary success is one called “Tit for Tat.” In Tit for Tat, our opening position is typically one of trust and cooperation. But if we’re taken advantage of, then we raise our defences and respond in kind.
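
Tit for Tat is simple enough to sketch in a few lines. The payoff numbers below are the textbook Prisoner’s Dilemma values, not figures from any particular study:

```python
# Textbook Prisoner's Dilemma payoffs, as (my points, their points).
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # I cooperate, they defect: I am exploited
    ("D", "C"): (5, 0),  # I defect, they cooperate: I exploit
    ("D", "D"): (1, 1),  # mutual defection
}

def tit_for_tat(opponent_history):
    """Open with cooperation; afterward, mirror the opponent's last move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    history_a, history_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)  # each player sees the other's past moves
        move_b = strategy_b(history_a)
        points_a, points_b = PAYOFFS[(move_a, move_b)]
        score_a, score_b = score_a + points_a, score_b + points_b
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (30, 30): trust sustained, both prosper
print(play(tit_for_tat, always_defect))  # (9, 14): burned once, then defences up
```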

So, when we look at the neurological basis of trust, consistency is another requirement. We will be willing to trust a brand until it gives us a reason not to. The more reliably the brand earns that trust, the more embedded that trust will become. As I said in the previous post, consistency builds beliefs, and once beliefs are formed, it’s difficult to shake them loose.

Trying to thread this needle between trust and profitability can become an exercise in marketing “spin”: telling your customers you’re trustworthy while you’re doing everything possible to maximize your profits. A case in point — which we’ve seen repeatedly — is Facebook and its increasingly transparent efforts to maximize advertising revenue while gently whispering in our ear that we should trust it with our most private information.

Given the potential conflict between trust and profit, is trusting a corporation a lost cause? No, but it does put a huge amount of responsibility on the customer. The Edelman study has made it abundantly clear that if there is such a thing as a “market” for trust, then trust is in dangerously short supply. This is why we’re turning to brands and for-profit corporations as a place to put our trust. We have built a society where we believe they’re the only thing we can trust.

Mark Carney, the former governor of the Bank of England and, before that, of the Bank of Canada, puts this idea forward in his new book, “Value(s).” In it, he shows how “market economies” have evolved into “market societies,” where price determines the value of everything. And corporations will follow profit, wherever it leads.

If we understand that fundamental characteristic of corporations, it does bring an odd kind of power that rests in the hands of consumers.

Markets are not unilateral beasts. They rely on the balance between supply and demand. We form half that equation. It is our willingness to buy that determines prices in Carney’s “market societies.” So, if we are willing to place our trust in a brand, we can also demand that the brand prove our trust has not been misplaced, through the rewards and penalties built into the market.

Essentially, we have to make trust profitable.