The Terrors of New Technology

My neighbour just got a new car. And he is terrified. He told me so yesterday. He has no idea how the hell to use it. This isn’t just a new car. It’s a massive learning project that can intimidate the hell out of anyone. It’s technology run amok. It’s the canary in the coal mine of the new world we’re building.

Perhaps – just perhaps – we should be more careful in what we wish for.

Let me provide the back story. His last car was his retirement present to himself, which he bought in 2000. He loved the car, a hard-top convertible that was state of the art at the time. But this was well before the Internet of Things and connected technology. The car did pretty much what you expected it to. Almost anyone could get behind the wheel and figure out how to make it go.

This year, under much prompting from his son, he finally decided to sell his beloved convertible and get a new car. But this isn’t just any car. It is a high-end electric sports car. Again, it is top of the line. And it is connected in pretty much every way you could imagine, and in many ways that would never cross any of our minds.

My neighbour has had this new car for about a week. And he’s still afraid to drive it anywhere. “Gord,” he said, “the thing terrifies me. I still haven’t figured out how to get it to open my garage door.” He has done online tutorials. He has set up a Zoom session with the dealer to help him navigate the umpteen zillion screens that show up on the smart display. After several frustrating experiments, he has learned he needs to pair it with his wifi system at home to get it to recharge properly. No one could just hop behind the wheel and drive it. You would have to sign up for an intensive technology boot camp before you were ready to climb a near-vertical learning curve. The capabilities of this car are mind boggling. And that’s exactly the problem. It’s damned near impossible to do anything with a boggled mind.

The acceptance of new technology has generated a vast body of research. I myself did an exhaustive series of blog posts on it back in 2014. Ever since sociologist Everett Rogers did his seminal work on the topic back in 1962, we have known that there are hurdles to overcome in grappling with something new, and we don’t all clear the hurdles at the same rate. Some of us never clear them at all.

But I also suspect that the market, especially at the high end, has become so enamored with embedding technology that it has forgotten how difficult it might be for some of us to adopt that technology, especially those of us of a certain age.

I am and always have been an early adopter. I geek out on new technology. That’s probably why my neighbour has tapped me to help him figure out his new car. I’m the guy my family calls when they can’t get their new smartphone to work. And I don’t mind admitting I’m slipping behind. I think we’re all the proverbial frogs in boiling water. And that water is technology. It’s getting harder and harder just to use the new shit we buy.

Here’s another thing that drives me batty about technology. It’s a constantly moving target. Once you learn something, it doesn’t stay learnt. It upgrades itself, changes platforms or becomes obsolete. Then you have to start all over again.

Last year, I started retrofitting our home to be a little smarter. And in the space of that year, I have had sensors that mysteriously go offline, hubs that suddenly stop working, automation routines that are moodier than a hormonal teenager and a lot of stuff that just fits into the “I have no idea” category. When it all works, it’s brilliant. I remember that one day – it was special. The other 364 have been a pain in the ass of varying intensity. And that’s for me, the tech guy. My wife sometimes feels like a prisoner in her own home. She has little appreciation for the mysterious gifts of technology that allow me to turn on our kitchen lights when we’re in Timbuktu (should we ever go there, and if we can find a good wifi signal).

Technology should be a tool. It should serve us, not enslave us to its whims. It would be so nice to be able to just make coffee with our new coffee maker, instead of spending a week trying to pair it with our toaster so breakfast is perfectly synchronized.

Oops, got to go. My neighbour’s car has locked him in his garage.

Moving Beyond Willful Ignorance

This is not the post I thought I’d be writing today. Two weeks ago, when I started to try to understand willful ignorance, I was mad. I suspect I shared that feeling with many of you. I was tired of the deliberate denial of fact that had consequences for all of us. I was frustrated with anti-masking, anti-vaxxing, anti-climate change and, most of all, anti-science. I was ready to go to war with those I saw in the other camp.

And that, I found out, is exactly the problem. Let me explain.

First, to recap. As I talked about two weeks ago, willful ignorance is a decision based on beliefs, so it’s very difficult – if not impossible – to argue, cajole or inform people out of it. And, as I wrote last week, willful ignorance has some very real and damaging consequences. This post was supposed to talk about what we do about that problem. I intended to find ways to isolate the impact of willful ignorance and minimize its downside. In doing so, I was going to suggest putting up even more walls to separate “us” from “them.”

But the more I researched this and thought about it, the more I realized that that was exactly the wrong approach. Because this recent plague of willful ignorance is many things, but – most of all – it’s one more example of how we love to separate “us” from “them.” And both sides, including mine, are equally guilty of doing this. The problem we have to solve here is not so much to change the way that some people process information (or don’t) in a way we may not agree with. What we have to fix is a monumental breakdown of trust.

Beliefs thrive in a vacuum. In a vacuum, there’s nothing to challenge them. And we have all been forced into a kind of ideological vacuum for the past year and a half. I talked about how our physical world creates a more heterogeneous ideological landscape than our virtual world does. In a normal life, we are constantly rubbing elbows with those of all leanings. And, if we want to function in that life, we have to find a way to get along with them, even if we don’t like them or agree with them. For most of us, that natural and temporary social bonding is something we haven’t had to do much lately.

It’s this lowering of our ideological defence systems that starts to bridge the gaps between us and them. And it also starts pumping oxygen into our ideological vacuums, prying the lids off our air-tight belief systems. It might not have a huge impact, but this doesn’t require a huge impact. A little trust can go a long way.

After World War II, psychologists and sociologists started to pick apart a fundamental question – how did our world go to war with itself? How, in the name of humanity, did the atrocities of the war occur? One of the areas they started to explore with vigour was this fundamental need of humans to sort ourselves into the categories of “us” and “them”.

In the 1970s, psychologist Henri Tajfel found that we barely need a nudge to start creating in-groups and out-groups. We’ll do it over anything, even something as trivial as which abstract artist, Klee or Kandinsky, we prefer. Once sorted on the flimsiest of premises, these groups showed a strong preference to favour their own group and punish the other. There was no pre-existing animosity between the groups, but in games such as the Banker’s Game, participants would even forego rewards for themselves if it meant depriving the other group of their share.

If we do this for completely arbitrary reasons such as those used by Tajfel, imagine how nasty we can get when the stakes are much higher, such as our own health or the future of the planet.

So, if we naturally sort ourselves into in-groups and out-groups, and our willingness to consider perspectives other than our own increases the more we’re exposed to those perspectives in a non-hostile environment, how do we start taking down those walls?

Here’s where it gets interesting.

What we need to break down the walls between “us” and “them” is to find another “them” that we can then unite against.

One of the theories about why the US is so polarized now is that, with the end of the Cold War, the US lost a common enemy that united “us” in opposition to “them”. Without the USSR, our natural tendency to categorize ourselves into in-groups and out-groups had no option but to turn inwards. You might think this is hogwash, but before you throw me into the “them” camp, let me tell you about what happened in Robbers Cave State Park in Oklahoma.

One of the experiments into this in-group/out-group phenomenon was conducted by psychologist Muzafer Sherif in the summer of 1954. He and his associates took 22 boys of similar backgrounds (i.e., they were all white, Protestant and from two-parent homes) to a summer camp at Robbers Cave and randomly divided them into two groups. First, they built team loyalty; then they gradually introduced a competitive environment between the two groups. Predictably, animosity and prejudice soon developed between them.

Sherif and his assistants then introduced a four-day cooling-off period and tried to reduce conflict by mixing the two groups. It didn’t work. In fact, it just made things worse. Things didn’t improve until the two groups were brought together to overcome a common obstacle, when the experimenters purposely sabotaged the camp’s water supply. Suddenly, the two groups came together to overcome a bigger challenge. This, by the way, is the same theory behind the process NASA and Blue Origin use to build trust in their flight crews.

As I said, when I started this journey, I was squarely in the “us” vs “them” camp. And – to be honest – I’m still fighting my instinct to stay there. But I don’t think that’s the best way forward. I’m hoping that as our world inches towards a better state of normal, everyday life will start to force the camps together and our evolved instincts for cooperation will start to reassert themselves.

I also believe that the past 19 months (and counting) will be a period that sociologists and psychologists will study for years to come, as it’s been an ongoing experiment in human behavior at a scope that may never happen again.

We can certainly hope so.

Why Is Willful Ignorance More Dangerous Now?

In last week’s post, I talked about how the presence of willful ignorance is becoming something we not only have to accept, but also learn how to deal with. In that post, I intimated that the stakes are higher than ever, because willful ignorance can do real damage to our society and our world.

So, if we’ve lived with willful ignorance for our entire history, why is it now especially dangerous? I suspect it’s not so much that willful ignorance has changed, but rather the environment in which we find it.

The world we live in is more complex because it is more connected. But there are two sides to this connection, one in which we’re more connected, and one where we’re further apart than ever before.

Technology Connects Us…

Our world and our society are made of networks. And when it comes to our society, connection creates networks that are more interdependent, leading to complex behaviors and non-linear effects.

We must also realize that our rate of connection is accelerating. The pace of technology has long tracked Moore’s Law: the observation that the number of transistors on a chip (and, roughly, its computing capability) doubles about every two years. For almost 60 years, this observation has been surprisingly accurate.
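As a quick back-of-the-envelope sketch (my illustration, not the original author’s), here is just how dramatic steady doubling becomes over six decades:

```python
# Illustrative only: model Moore's Law-style growth as one doubling
# every two years and ask how large the cumulative multiple gets.
def moores_law_factor(years: float, doubling_period: float = 2.0) -> float:
    """Return the growth multiple after `years` of steady doubling."""
    return 2 ** (years / doubling_period)

# Over roughly 60 years of doublings:
print(f"{moores_law_factor(60):,.0f}x")  # → 1,073,741,824x — about a billion-fold
```

A billion-fold increase in capability is the kind of number our linear intuitions are simply not built to grasp, which is part of why the density of our digital connections keeps surprising us.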

What this has meant for our ability to connect digitally is that the number and impact of our connections has also increased exponentially, and it will continue to increase in our future. This creates a much denser and more interconnected network, but it has also created a network that overcomes the naturally self-regulating effects of distance.

For the first time, we can have strong and influential connections with others on the other side of the globe. And, as we forge more connections through technology, we are starting to rely less on our physical connections.

And Drives Us Further Apart

The wear and tear of a life spent bumping into each other in a physical setting tends to smooth out our rougher ideological edges. In face-to-face settings, most of us are willing to moderate our own personal beliefs in order to conform to the rest of the crowd. Seventy years ago, psychologist Solomon Asch showed how willing we are to ignore the evidence of our own eyes in order to conform to the majority opinion of a crowd.

For the vast majority of our history, physical proximity has forced social conformity upon us. It leavens out our own belief structure in order to keep the peace with those closest to us, fulfilling one of our strongest evolutionary urges.

But, thanks to technology, that’s also changing. We are spending more time physically separated but technically connected. Our social conformity mechanisms are being short-circuited by filter bubbles where everyone seems to share our beliefs. This creates something called an availability bias: the things we see coming through our social media feeds form our view of what the world must be like, even though statistically they are not representative of reality.

It gives the willfully ignorant the illusion that everyone agrees with them — or, at least, enough people agree with them that it overcomes the urge to conform to the majority opinion.

Ignorance in a Chaotic World

These two things make our world increasingly fragile and subject to what chaos theorists call the Butterfly Effect, where seemingly small things can make massive differences.
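To make the Butterfly Effect concrete, here is a tiny, purely illustrative Python sketch (my addition, not part of the original argument) using the logistic map, a textbook chaotic system. Two runs whose starting points differ by one part in a billion end up completely different:

```python
# Illustrative only: the logistic map at r = 4 is a classic chaotic system.
# Two trajectories that start one part in a billion apart diverge completely.
def logistic_step(x: float, r: float = 4.0) -> float:
    """One iteration of the logistic map: x -> r * x * (1 - x)."""
    return r * x * (1 - x)

a, b = 0.2, 0.2 + 1e-9  # the "butterfly": a one-in-a-billion difference
max_gap = 0.0
for _ in range(50):
    a, b = logistic_step(a), logistic_step(b)
    max_gap = max(max_gap, abs(a - b))

print(f"largest gap after 50 steps: {max_gap:.3f}")
```

The exact numbers depend on floating-point details, but the divergence itself is robust: a vanishingly small nudge at the start produces trajectories with nothing in common a few dozen steps later.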

It’s this unique nature of our world, which is connected in ways it never has been before, that creates at least three reasons why willful ignorance is now more dangerous than ever:

One: The impact of ignorance can be quickly amplified through social media, causing a Butterfly Effect cascade. Case in point, the falsehood that the U.S. election results weren’t valid, leading to the Capitol insurrection of Jan. 6.

The mechanics of social media that led to this issue are many, and I have cataloged most of them in previous columns: the nastiness that comes from arm’s-length discourse, a rewiring of our morality, and the impact of filter bubbles on our collective thresholds governing anti-social behaviors.

Two: Probably a bigger cause for concern, the willfully ignorant are very easily consolidated into a power base for politicians willing to play to their beliefs. The far right — and, to a somewhat lesser extent, the far left — has learned this to devastating effect. All you have to do is abandon your predilection for telling the truth so you can help them rationalize their deliberate denial of facts. Do this and you have tribal support that is almost impossible to shake.

The move of populist politicians to use the willfully ignorant as a launch pad for their own purposes further amplifies the Butterfly Effect, ensuring that the previously unimaginable will continue to be the new state of normal.

Three: our expanding impact on the physical world. It’s not just our degree of connection that technology is changing exponentially. It’s also the degree of impact we have on our physical world.

For almost our entire time on earth, the world has made us. We have evolved to survive in our physical environment, where we have been subject to the whims of nature.

But now, increasingly, we humans are shaping the nature of the world we live in. Our footprint has an ever-increasing impact on our environment, and that footprint is also increasing exponentially, thanks to technology.

The earth and our ability to survive on it are — unfortunately — now dependent on our stewardship. And that stewardship is particularly susceptible to the impact of willful ignorance. In the area of climate change alone, willful ignorance could — and has — led to events with massive consequences. A recent study estimates that climate change is directly responsible for 5 million deaths a year.

For all these reasons, willful ignorance is now something that can have life and death consequences.

Making Sense of Willful Ignorance

Willful ignorance is nothing new. Depending on your beliefs, you could say it was willful ignorance that got Adam and Eve kicked out of the Garden of Eden. But the visibility of it is higher than it’s ever been before. In the past couple of years, we have had a convergence of factors that has pushed willful ignorance to the surface — a perfect storm of fact denial.

Some of those factors include the social media effect, the erosion of traditional journalism and a global health crisis that has us all focusing on the same issue at the same time. The net result is that we all have a very personal interest in the degree of ignorance prevalent in our society.

In one very twisted way, this may be a good thing. As I said, the willfully ignorant have always been with us. But we’ve always been able to shrug and move on, muttering “stupid is as stupid does.”

Now, however, the stakes are getting higher. Our world and society are at a point where willful ignorance can inflict some real and substantial damage. We need to take it seriously and we must start thinking about how to limit its impact.

So, for myself, I’m going to spend some time understanding willful ignorance. Feel free to come along for the ride!

It’s important to understand that willful ignorance is not the same as being stupid — or even just being ignorant, despite thousands of social media memes to the contrary.

Ignorance is one thing. It means we don’t know something. And sometimes, that’s not our fault. We don’t know what we don’t know. But willful ignorance is something very different. It is us choosing not to know something.

For example, I know many smart people who have chosen not to get vaccinated. Their reasons may vary. I suspect fear is a common denominator, and there is no shame in that. But rather than seek information to allay their fears, these folks have doubled down on beliefs based on little to no evidence. They have made a choice to ignore the information that is freely available.

And that’s doubly ironic, because the very same technology that enables willful ignorance has made more information available than ever before.

Willful ignorance is defined as “a decision in bad faith to avoid becoming informed about something so as to avoid having to make undesirable decisions that such information might prompt.”

And this is where the problem lies. The explosion of content has meant there is always information available to support any point of view. We also have the breakdown of journalistic principles that occurred in the past 40 years. Combined, we have a dangerous world of information that has been deliberately falsified in order to appeal to a segment of the population that has chosen to be willfully ignorant.

It seems a contradiction: The more information we have, the more that ignorance is a problem. But to understand why, we have to understand how we make sense of the world.

Making Sense of Our World

Sensemaking is a concept first introduced by organizational theorist Karl Weick in the 1970s. The concept has since been borrowed by those working in machine learning and artificial intelligence. At the risk of oversimplification, it provides a model to help us understand how we “give meaning to our collective experiences.”

[Diagram: the sensemaking process, from D.T. Moore and R. Hoffman, 2011]

The above diagram (from a 2011 paper by David T. Moore and Robert R. Hoffman) shows the sensemaking process. It starts with a frame — our understanding of what is true about the world. As we get presented with new data, we have to make a choice: Does it fit our frame or doesn’t it?

If it does, we preserve the frame and may elaborate on it, fitting the new data into it. If the data doesn’t support our existing frame, we then have to reframe, building a new frame from scratch.

Our brains love frames. It’s much less work for the brain to keep a frame than to build a new one. That’s why we tend to stick with our beliefs — another word for a frame — until we’re forced to discard them.

But, as with all human traits, our ways of making sense of the world vary across the population. Some of us are more apt to spend time on the right side of the diagram, more open to reframing and to evidence that may cause us to reframe.

That, by the way, is exactly how science is supposed to work. We refer to this capacity as critical thinking: the objective analysis and evaluation of data in order to form a judgment, even if it causes us to have to build a new frame.

Others hold onto their frames for dear life. They go out of their way to ignore data that may cause them to have to discard the frames they hold. This is what I would define as willful ignorance.

It’s misleading to think of this as just being ignorant; that would simply indicate a lack of available data. It’s also misleading to attribute it to a lack of intelligence; that would be an inability to process the data. With willful ignorance, we’re not talking about either of those things. We are talking about a conscious and deliberate decision to ignore available data. And I don’t believe you can fix that.

We fall into the trap of thinking we can educate, shame or argue people out of being willfully ignorant. We can’t. This post is not intended for the willfully ignorant. They have already ignored it. This is just the way their brains work. It’s part of who they are. Wishing they weren’t this way is about as pointless as wishing they were a world-class pole vaulter, that they were seven feet tall or that their brown eyes were blue.

We have to accept that this situation is not going to change. And that’s what we have to start thinking about. Given that we have willful ignorance in the world, what can we do to minimize its impact?

Adrift in the Metaverse

Humans are nothing if not chasers of bright, shiny objects. Our attention is always focused beyond the here and now. That is especially true when here and now is a bit of a dumpster fire.

The ultrarich know that this is part of the human psyche, and they are doubling down on it. Jeff Bezos and Elon Musk are betting on space. But others — including Mark Zuckerberg — are betting on something called the metaverse.

Just this past summer, Zuck told his employees about his master plan for Facebook:

“Our overarching goal across all of (our) initiatives is to help bring the metaverse to life.”

So what exactly is the metaverse? According to Wikipedia, it is

“a collective virtual shared space, created by the convergence of virtually enhanced physical reality and physically persistent virtual space, including the sum of all virtual worlds, augmented reality, and the Internet.”

The metaverse is a world of our own making, which exists in the dimensions of a digital reality. There we imagine we can fix what we screwed up in the maddeningly unpredictable real world. It is the ultimate in bright, shiny objects.

Science fiction and the entertainment industry have been toying with the idea of the metaverse for some time now. The term itself comes from Neal Stephenson’s 1992 novel “Snow Crash.” It has been given the Hollywood treatment numerous times, notably in “The Matrix” and “Ready Player One.” But Silicon Valley venture capitalists are rushing to make fiction into fact.

You can’t really blame us for throwing in the towel on the world we have systematically wrecked. There are few glimmers of hope out there in the real world. What we have wrought is painful to contemplate. So we are doing what we’ve always done: reaching for what we want rather than fixing what we have. Take the Reporters Without Borders Uncensored Library, for example.

There are many places in the real world where journalism is censored, like Russia, the Middle East, Vietnam and China. But in the metaverse, there is the option of leapfrogging over all the political hurdles we stumble over in the real world. So Reporters Without Borders and two German creative agencies built a meta library in the meta world of Minecraft. Here, censored articles are made into virtual books, accessible to all who want to check them out.

It’s hard to find fault with this. Censorship is a tool of oppression. Here, a virtual world offered an inviting loophole to circumvent it. The metaverse came to the rescue. What is the problem with that?

The biggest risk is this: We weren’t built for the metaverse. We can probably adapt to it, somewhat, but everything that makes us tick has evolved in a flesh-and-blood world, and — to quote a line from Joni Mitchell’s “Big Yellow Taxi” — “You don’t know what you’ve got till it’s gone.”

It’s fair to say that right now the metaverse is a novelty. Most of your neighbors, friends and family have never heard of it. But odds are it will become our life. In a 2019 Wired article called “Welcome to the Mirror World,” Kevin Kelly explained: “we are building a 1-to-1 map of almost unimaginable scope. When it’s complete, our physical reality will merge with the digital universe.”

In a Forbes article, futurist Cathy Hackl gives us an example of what this merger might look like:

“Imagine walking down the street. Suddenly, you think of a product you need. Immediately next to you, a vending machine appears, filled with the product and variations you were thinking of. You stop, pick an item from the vending machine, it’s shipped to your house, and then continue on your way.”

That sounds benign — even helpful. But if we’ve learned one thing it’s this: When we try to merge technology with human behavior, there are always unintended consequences that arise. And when we’re talking about the metaverse, those consequences will likely be massive.

It is hubristic in the extreme to imagine we can engineer a world that will be a better match for our evolved humanware mechanics than the world we actually evolved within. It’s sheer arrogance to imagine we can build that world, and equally arrogant to imagine that we can thrive within it.

We have a bright, shiny bias built into us that will likely lead us to ignore the crumbling edifice of our reality. German futurist Gerd Leonhard, for one, warns us about an impending collision between technology and humanity:

“Technology is not what we seek but how we seek: the tools should not become the purpose. Yet increasingly, technology is leading us to ‘forget ourselves.’”

Imagine a Pandemic without Technology

As the writer of a weekly post that tends to look at the intersection between human behavior and technology, the past 18 months have been interesting – and by interesting, I mean a twisted ride through gut-wrenching change unlike anything I have ever seen before.

I can’t even narrow it down to 18 months. Before that, there was plenty more that was “unprecedented” – to cherry-pick a word from my post from a few weeks back. I have now been writing for MediaPost in one place or another for 17 years. My very first post was on August 19, 2004. That was 829 posts ago. If you add the additional posts I’ve done for my own blog – outofmygord.com – I’ve just ticked over 1,100 on my odometer. That’s a lot of soul searching about technology. And the last several months have still been in a class by themselves.

Now, part of this might be where my own head is at. Believe it or not, I do sometimes try to write something positive. But as soon as my fingers hit the keyboard, things seem to spiral downwards. Every path I take seems to take me somewhere dark. There has been precious little that has sparked optimism in my soul.

Today, for example, prior to writing this, I took three passes at writing something else. Each quickly took a swerve towards impending doom. I’m getting very tired of this. I can only imagine how you feel, reading it.

So I finally decided to try a thought experiment. “What if,” I wondered, “we had gone through the past 17 months without the technology we take for granted? What if there was no Internet, no computers, no mobile devices? What if we had lived through the Pandemic with only the technology we had – say – a hundred years ago, during the global pandemic of the Spanish Flu starting in 1918? Perhaps the best way to determine the sum total contribution of technology is to do it by process of elimination.”

The Cons

Let’s get the negatives out of the way. First, you might say that technology enabled the flood of misinformation and conspiracy theorizing that has been so top-of-mind for us. Well, yes – and no.

Distrust in authority is nothing new. It’s always been there, at one end of a bell curve that spans the attitudes of our society. And nothing brings the outliers of society into global focus faster than a crisis that affects all of us.

There was public pushback against the very first vaccine ever invented: the smallpox vaccine. Now, granted, the early method was to rub pus from a cowpox blister into a cut in your skin and hope for the best. But it worked. Smallpox is now a thing of the past.

And, if we are talking about pushback against public health measures, that’s nothing new either. Exactly the same thing happened during the 1918-1919 Pandemic. Here’s one eerily familiar excerpt from a journal article looking at the issue, “Public-gathering bans also exposed tensions about what constituted essential vs. unessential activities. Those forced to close their facilities complained about those allowed to stay open. For example, in New Orleans, municipal public health authorities closed churches but not stores, prompting a protest from one of the city’s Roman Catholic priests.”

What is different, thanks to technology, is that public resistance is so much more apparent than it’s ever been before. And that resistance is coming with faces and names we know attached. People are posting opinions on social media that they would probably never say to you in a face-to-face setting, especially if they knew you disagreed with them. Our public and private discourse is now held at arm’s length by technology. Gone are all the moderating effects that come with sharing the same physical space.

The Pros

Try as I might, I couldn’t think of another “con” that technology has brought to the past 17 months. The “pro” list, however, is far too long to cover in this post, so I’ll just mention a few that come immediately to mind.

Let’s begin with the counterpoint to the aforementioned “con”: the misinformation factor. While misinformation was definitely spread over the past year and a half, so was reliable, factual information. And for those willing to pay attention to it, it enabled us to find out what we needed to know in order to practice public health measures at a speed previously unimagined. Without technology, we would have been slower to act and – perhaps – fewer of us would have acted at all. At worst, in this case, technology probably nets out to zero.

But technology also enabled the world to keep functioning, even if in a different form. Working from home would have been impossible without it. Commercial engines kept chugging along. Business meetings switched to online platforms. The Dow Jones Industrial Average, as of this writing, is over 20% higher than it was before the pandemic. In contrast, over the 1918–1919 pandemic, the stock market finished the third wave almost 32% lower than it was at the start of the first. Of course, there are other factors to consider, but I suspect we can thank technology for at least some of that difference.

It’s easy to point to the negatives that technology brings, but if you consider it as a whole, technology is overwhelmingly a blessing.

What was interesting to me in this thought experiment was how apparent it became that technology keeps the cogs of our society turning more effectively, but when there is a price to be paid, it is typically paid in our social bonds.

Why is Everything Now ‘Unprecedented’?

Just once, I would like to get through one day without hearing the word “unprecedented.” And I wonder, is that just the media trying to get a click, or is the world truly that terrible?

Take the Olympics. In my lifetime, I’ve never seen an Olympics like this one. Empty stands. Athletes having to leave within 48 hours of their last event. Opening and closing ceremonies unlike anything we have ever seen. It’s, well — unprecedented.

The weather is unprecedented. What is happening in politics is unprecedented. The pandemic is unprecedented, at least in our lifetimes. I don’t know about you, but I feel like I’m watching a blockbuster where the world will eventually end — but we just haven’t got to that part of the movie yet. I feel the palpable sensation of teetering on the edge of a precipice. And I’m pretty sure it’s happened before.

Take the lead-ups to the two world wars, for example. If you plot a timeline of the events that led to either July 28, 1914 or Sept. 1, 1939, there is a noticeable acceleration of momentum. At first, the points on the timeline are spread apart, giving the world a chance to once again catch its collective breath. But as we get closer and closer to those dates circled in red, things pick up. There are cascades of events that eventually lead to the crisis point. Are we in the middle of such a cascade?

Part of this might just be network knock-on effects that happen in complex environments. But I also wonder if we just become a little shell-shocked, being nudged into a numb acceptance of things we would have once found intolerable.

Author and geographer Jared Diamond calls this “creeping normality.” In his book “Collapse: How Societies Choose to Fail or Succeed,” he used the example of the deforestation and environmental degradation that happened on Easter Island — and how, despite the impending doom, the natives still decided to chop down the last tree: “I suspect, though, that the disaster happened not with a bang but with a whimper. After all, there are those hundreds of abandoned statues to consider. The forest the islanders depended on for rollers and rope didn’t simply disappear one day—it vanished slowly, over decades.”

Creeping normality continually and imperceptibly nudges us from the unacceptable to the acceptable and we don’t even notice it’s happening. It’s a cognitive bias that keeps us from seeing reality for what it is. Creeping normality is what happens when our view of the world comes through an Overton Window.

I have mentioned the concept of the Overton Window before. The term was first introduced by political analyst Joseph Lehman, who named it after his colleague, Joseph Overton. It was initially coined to show that the range of political policies the public finds acceptable will shift over time. What was once considered unthinkable can eventually become acceptable or even popular, given the shifting sensitivities of the public. As an example, the antics of Donald Trump would once have been considered unacceptable in any public venue — but as our reality shifted, we saw them become mainstream from an American president.

I suspect that the media does the same thing with our perception of the world in general. The news media demands the exceptional. We don’t click on “ordinary.” So it consistently shifts our Overton Window of what we pay attention to, moving us toward the outrageous. Things that once would have caused riots are now greeted with a yawn. Add to this the unrelenting pace of the news cycle: what was outrageous yesterday slips into the past, replaced by what is outrageous today.

And while I’m talking about outrageous, let’s look at the root of that term. The whole point of something being outrageous is to prompt us into being outraged — or moved enough to take action. And, if our sensitivity to outrage is constantly being numbed, we are no longer moved enough to act.

When we become insensitive to things that are unprecedented, we’re in a bad place. Our trust in information is gone. We seek information that comforts us that the world is not as bad as we think it is. And we ignore the red flags we should be paying attention to.

If you look at the lead-ups to both world wars, you see this same pattern. Things that happened regularly in 1914 or 1939, just before the outbreak of war, would have been unimaginable just a few years earlier. The momentum of mayhem picked up as the world raced to follow a rapidly moving Overton Window. Soon, before we knew it, all hell broke loose and the world was left with only one alternative: going to war.

An Overton Window can just happen, or it can be intentionally planned. Politicians from the fringes, especially the right, have latched on to the Window, taking something intended to be an analysis and turning it into a strategy. They now routinely float “policy balloons” that they know are on the fringe, hoping to trigger a move in our Window to either the right or left. Over time, they can use this strategy to introduce legislation that would once have been vehemently rejected.

The danger in all this is the embedding of complacency. Ultimately, our willingness to take action against threat is all that keeps our society functioning. Whether it’s our health, our politics or our planet, we have to be moved to action before it’s too late.

When the last tree falls on Easter Island, we don’t want to be the ones with the axe in our hands.

Seeking “Burstiness” When Working from Home

I was first introduced to the concept of “burstiness” by psychologist Adam Grant in his podcast, “Worklife.” In one episode, he visits the writers’ room at “The Daily Show” and probes the creativity that crackles when the writers are on a roll. A big part of that energy, according to Grant, comes from “burstiness.”

The term was initially coined by Anita Williams Woolley, associate professor of organizational behavior and theory at Carnegie Mellon University.

Burstiness is, according to Grant,

“like the best moments in improv jazz. Someone plays a note, someone else jumps in with a harmony, and pretty soon, you have a collective sound that no one planned. Most groups never get to that point, but you know burstiness when you see it. At ‘The Daily Show,’ the room just literally sounds like it’s bursting with ideas.”

Last week, we reran a post I wrote at the beginning of the pandemic wondering if we might be forsaking some important elements of team effectiveness in our rush to embrace the virtual workplace. Our brains have evolved to be most effective at creating relationships with others when we’re face-to-face. There is a rich bandwidth of communication through which we build trust in others that is reliant on physical proximity.

Zoom just doesn’t cut it.

So, would this idea of burstiness be sacrificed in a remote work environment? Let’s dig a little deeper.

Grant outlines the things that need to be in place for burstiness to occur:

  • Spending time with each other
  • Psychological safety
  • A proper balance of structure
  • The right people in the room

Let’s look at these in reverse order.

The right people in the room

First, how do you get the right people in the room – or, in the case of a remote workforce, on the same Zoom call? Here, diversity seems to be the key. You need different perspectives. Creativity comes from diversity, not sameness.

Dr. Woolley offers the example of the Kennedy and Lincoln presidential cabinets. Kennedy’s cabinet was comprised of Ivy League intellectual elites who all came from similar backgrounds and had the same ideological view of the world. Lincoln’s cabinet was fractious, to say the least. After his election, Lincoln reached out to bitter rivals who ran against him for the presidency — including Salmon Chase and William Seward — and gave them senior positions in his cabinet. Lincoln’s cabinet is generally considered by historians as the most effective political team in American history. Kennedy’s cabinet suffered from a debilitating case of “groupthink” that launched the Bay of Pigs invasion and almost ignited another world war.

There is no reason why a virtual workplace cannot embrace diversity. You just have to recruit the right people through bias-resistant practices like blind auditions and using multiple interviewers.

A proper balance of structure

Grant says the right structure provides the rules of engagement for creative bursts. You need some basic guidelines so you can focus on the work and not the mechanics of the process. To use Grant’s example, jazz improv seems unstructured, but there are actually some commonly understood ground rules on which the improvisation is built.

This brings to mind psychologist Mihaly Csikszentmihalyi’s concept of Flow, the condition where creativity flows naturally. Structure allows Flow to happen by giving the brain the framework it needs to focus wholly on the task at hand. There is no reason why this structure can’t apply equally to traditional and virtual work teams.

But the next two conditions get a little trickier for the virtual workplace. Let’s look at them together:

Psychological safety and spending time together

Psychological safety is a term coined by Harvard Business School professor Amy Edmondson. When it comes to promoting “burstiness,” psychological safety gives us the confidence to contribute without being punished or ridiculed. It allows us to take creative risks. Another word for it would be trust.

And that brings us to the second part — spending time together — and the challenge it poses for a virtual workplace. Trust is not built overnight, and it is not built over Zoom or Slack.

As I said in my previous post, organizational behavior specialist Mahdi Roghanizad from Ryerson University has found that the connections in our brains that create trust may not even be activated unless we’re face-to-face with someone. We need eye contact to nudge this part of ourselves into life.

So, if creativity is a requirement in the workplace, and connecting face-to-face is required to foster creativity, is a virtual office a non-starter? Not necessarily. In my next post, I’ll look at some ways we might still be able to achieve burstiness — even when we’re at home in our pajamas.

Why Our Brains Struggle With The Threat Of Data Privacy

It seems contradictory: we don’t want to share our personal data, but, according to a recent study reported on by MediaPost’s Laurie Sullivan, we want the brands we trust to know us when we come shopping.

But it’s not — really. It ties in with the way our brains have always worked.

Again, we just have to understand that we really don’t understand how the data ecosystem works — at least, not on an instant and intuitive level. Our brains have no evolved mechanisms for dealing with new concepts like data privacy, so we borrow other parts of the brain that evolved for different purposes. Evolutionary biologists call this “exaptation.”

For example, the way we deal with brands seems to be the same way we deal with people — and we have tons of experience doing that. Some people we trust. Most people we don’t. For the people we trust, we have no problem sharing something of ourselves. In fact, it’s exactly that sharing that nurtures relationships and helps them grow.

It’s different with people we don’t trust. Not only do we not share with them, we work to avoid them, putting physical distance between us and them. We’d cross to the other side of the street to avoid bumping into them.

In a world that was ordered and regulated by proximity, this worked remarkably well. Keeping our enemies at arm’s length generally kept us safe from harm.

Now, of course, distance doesn’t mean the same thing it used to. We now maneuver in a world of data, where proximity and distance have little impact. But our brains don’t know that.

As I said, the brain doesn’t really know how digital data ecosystems work, so it does its best to substitute concepts it has evolved to handle those it doesn’t understand at an intuitive level.

The proxy for distance the brain seems to use is task focus. If we’re trying to do something, everything related to that thing is “near,” and everything not relevant to it is “far.” But this is an imperfect proxy at best and an outright misleading one at worst.

For example, we will allow our data to be collected in order to complete a task. The task is “near.” In most cases, the data we share has little to do with the task we’re trying to accomplish, so it is labelled by the brain as “far” and therefore poses no immediate threat.

It’s a bait-and-switch tactic that data harvesters have perfected. Our trust-warning systems are not engaged because there are no proximate signs to trigger them. Any potential breaches of trust happen well after the fact – if they happen at all. Most times, we’re simply not aware of where our data goes or what happens to it. All we know is that allowing that data to be collected takes us one step closer to accomplishing our task.

That’s what sometimes happens when we borrow one evolved trait to deal with a new situation:  The fit is not always perfect. Some aspects work, others don’t.

And that is exactly what is happening when we try to deal with the continual erosion of online trust. In the moment, our brain is trying to apply the same mechanisms it uses to assess trust in a physical world. What we don’t realize is that we’re missing the warning signs our brains have evolved to intuitively look for.

We also drag this evolved luggage with us when we’re dealing with our favorite brands. One of the reasons you trust your closest friends is that they know you inside and out. This intimacy is a product of a physical world. It comes from sharing the same space with people.

In the virtual world, we expect the brands we know and love to have this same knowledge of us. It frustrates us when we are treated like a stranger. Think of how you would react if the people you love the most gave you the same treatment.

This jury-rigging of our personal relationship machinery to do double duty for the way we deal with brands may sound far-fetched, but marketing brands have only been around for a few hundred years. That is just not enough time for us to evolve new mechanisms to deal with them.

Yes, the rational, “slow loop” part of our brains can understand brands, but the “fast loop” has no “brand” or “data privacy” modules. It has no choice but to use the functional parts it does have.

As I mentioned in a previous post, there are multiple studies that indicate that it’s these parts of our brain that fire instantly, setting the stage for all the rationalization that will follow. And, as our own neuro-imaging study showed, it seems that the brain treats brands the same way it treats people.

I’ve been watching this intersection between technology and human behaviour for a long time now. More often than not, I see this tendency of the brain to make split-second decisions in environments where it just doesn’t have the proper equipment to make them. When we stop to think about these things, we believe we understand them. And we do, but we had to stop to think. In the vast majority of cases, that’s just not how the brain works.

The Privacy War Has Begun

It started innocently enough….

My iPhone just upgraded itself to iOS 14.6, and the privacy protection purge began.

In late April, Apple added App Tracking Transparency (ATT) to iOS (actually in 14.5, but for reasons mentioned in this Forbes article, I hadn’t noticed the change until the most recent update). Now, whenever I launch an app that is part of the online ad ecosystem, I’m asked whether I want to share data to enable tracking. I always opt out.
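For context on how these prompts work under the hood: an app can’t just show Apple’s tracking dialog at will. It has to declare a usage string in its Info.plist (the key name, NSUserTrackingUsageDescription, is Apple’s; the wording below is a hypothetical example of the kind of benefit-framing language described above) and then ask the system to present the prompt. A minimal sketch of that declaration:

```xml
<!-- Info.plist fragment: required before iOS will show the ATT prompt.
     The string below is what the user sees in the system dialog -->
<key>NSUserTrackingUsageDescription</key>
<string>We use your activity to show you more relevant ads and to help support us.</string>
```

The app then calls ATTrackingManager.requestTrackingAuthorization (from Apple’s AppTrackingTransparency framework) to trigger the dialog; only if the user taps “Allow” does the app get access to the IDFA. Note that the dialog’s yes/no buttons are fixed by the OS — the developer only controls the explanatory string, which is why the benefit-framing language varies so much from app to app.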

These alerts have been generally benign. They reference benefits like “more relevant ads,” a “customized experience” and “helping to support us.” Some assume you’re opting in, making opting out a much more circuitous and time-consuming process. Most also avoid the words “tracking” and “privacy.” One referred to it in these terms: “Would you allow us to refer to your activity?”

My answer is always no. Why would I want to customize an annoyance and make it more relevant?

All in all, it’s a deceptively innocent wrapper to put on what will prove to be a cataclysmic event in the world of online advertising. No wonder Facebook is fighting it tooth and nail, as I noted in a recent post.

This shot across the bow of online advertising marks an important turning point for privacy. It’s the first time that someone has put users ahead of advertisers. Everything up to now has been lip service from the likes of Facebook, telling us we have complete control over our privacy while knowing that actually protecting that privacy would be so time-consuming and convoluted that the vast majority of us would do nothing, thus keeping its profitability flowing through the pipeline.

The simple fact of the matter is that without its ability to micro-target, online advertising just isn’t that effective. Take away the personal data, and online ads are pretty non-engaging. Also, given our continually improving ability to filter out anything that’s not directly relevant to whatever we’re doing at the time, these ads are very easy to ignore.

Advertisers need that personal data to stand any chance of piercing our non-attentiveness long enough to get a conversion. It’s always been a crapshoot, but Apple’s ATT just stacked the odds very much against the advertiser.

It’s about time. Facebook and online ad platforms have had little to no real pushback against the creeping invasion of our privacy for years now. We have no idea how extensive and invasive this tracking has been. The only inkling we get is when the targeting nails the ad delivery so well that we swear our phone is listening to our conversations. And, in a way, it is. We are constantly under surveillance.

In addition to Facebook’s histrionic bitching about Apple’s ATT, others have started to find workarounds, as reported by 9to5Mac. ATT specifically targets the IDFA (Identifier for Advertisers), which enables cross-app tracking through a unique identifier. Chinese ad networks backed by the state-endorsed Chinese Advertising Association have been encouraging the adoption of CAID identifiers as an alternative to the IDFA. Apple has gone on record as saying ATT will be globally implemented and enforced. While CAID can’t be policed at the OS level, Apple has said that apps that track users without their consent by any means, including CAID, could be removed from the App Store.

We’ll see. Apple doesn’t have a very consistent track record when it comes to holding the line against Chinese app providers. WeChat, for one, has been granted exceptions to Apple’s developer restrictions that have not been extended to anyone else.

For its part, Google has taken a tentative step toward following Apple’s lead with a new privacy initiative on Android devices, as reported by SlashGear. Google Play has asked developers to share what data they collect and how they use it. At this point, Google won’t be requiring opt-in prompts as Apple does.

All of this marks a beginning. If it continues, it will throw a Kong-sized monkey wrench into the works of online advertising. The entire ecosystem is built on ad-supported models that depend on collecting and storing user data. Apple has begun nibbling away at that foundation.

The toppling has begun.