Why Is Willful Ignorance More Dangerous Now?

In last week’s post, I talked about how the presence of willful ignorance is becoming something we not only have to accept, but also learn how to deal with. In that post, I intimated that the stakes are higher than ever, because willful ignorance can do real damage to our society and our world.

So, if we’ve lived with willful ignorance for our entire history, why is it now especially dangerous? I suspect it’s not so much that willful ignorance has changed, but rather the environment in which we find it.

The world we live in is more complex because it is more connected. But this connection has two sides: in one sense, we are more connected than ever; in another, we are further apart than ever before.

Technology Connects Us…

Our world and our society are made of networks. And when it comes to our society, connection creates networks that are more interdependent, leading to complex behaviors and non-linear effects.

We must also realize that our rate of connection is accelerating. The pace of technology has long tracked Moore’s Law, the observation that the number of transistors on a chip (and, roughly, the capability of our computers) doubles about every two years. For almost 60 years, this observation has been surprisingly accurate.
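
To get a feel for what that compounding means, here is a quick back-of-the-envelope sketch in Python. The two-year doubling period and the 60-year horizon are simply the figures cited above, not precise industry data.

```python
# Rough illustration of Moore's Law compounding, using the figures above:
# a doubling roughly every two years, sustained for about 60 years.
DOUBLING_PERIOD_YEARS = 2
HORIZON_YEARS = 60

doublings = HORIZON_YEARS // DOUBLING_PERIOD_YEARS   # 30 doublings
growth_factor = 2 ** doublings                        # 2^30

print(f"{doublings} doublings -> roughly a {growth_factor:,}x increase")
# 30 doublings -> roughly a 1,073,741,824x increase (about a billion-fold)
```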

What this has meant for our ability to connect digitally is that the number and impact of our connections have also increased exponentially, and they will continue to increase. This has created a much denser and more interconnected network, but also a network that overcomes the naturally self-regulating effects of distance.

For the first time, we can have strong and influential connections with others on the other side of the globe. And, as we forge more connections through technology, we are starting to rely less on our physical connections.

And Drives Us Further Apart

The wear and tear of a life spent bumping into each other in a physical setting tends to smooth out our rougher ideological edges. In face-to-face settings, most of us are willing to moderate our own personal beliefs in order to conform to the rest of the crowd. In the early 1950s, psychologist Solomon Asch showed how willing we are to ignore the evidence of our own eyes in order to conform to the majority opinion of a crowd.

For the vast majority of our history, physical proximity has forced social conformity upon us. It smooths out our own belief structure in order to keep the peace with those closest to us, satisfying one of our strongest evolutionary urges.

But, thanks to technology, that’s also changing. We are spending more time physically separated but technically connected. Our social conformity mechanisms are being short-circuited by filter bubbles where everyone seems to share our beliefs. This creates something called an availability bias: the things we see coming through our social media feeds form our view of what the world must be like, even though statistically they are not representative of reality.

It gives the willfully ignorant the illusion that everyone agrees with them — or, at least, enough people agree with them that it overcomes the urge to conform to the majority opinion.

Ignorance in a Chaotic World

These two things make our world increasingly fragile and subject to what chaos theorists call the Butterfly Effect, where seemingly small things can make massive differences.

It’s this unique nature of our world, which is connected in ways it never has been before, that creates at least three reasons why willful ignorance is now more dangerous than ever:

One: The impact of ignorance can be quickly amplified through social media, causing a Butterfly Effect cascade. Case in point: the falsehood that the U.S. election results weren’t valid, which led to the Capitol insurrection of Jan. 6.

The mechanics of social media that led to this issue are many, and I have cataloged most of them in previous columns: the nastiness that comes from arm’s-length discourse, a rewiring of our morality, and the impact of filter bubbles on our collective thresholds governing anti-social behaviors.

Two: A probably bigger cause for concern is that the willfully ignorant are very easily consolidated into a power base for politicians willing to play to their beliefs. The far right — and, to a somewhat lesser extent, the far left — has learned this to devastating effect. All you have to do is abandon your predilection for telling the truth so you can help them rationalize their deliberate denial of facts. Do this and you have tribal support that is almost impossible to shake.

The move of populist politicians to use the willfully ignorant as a launch pad for their own purposes further amplifies the Butterfly Effect, ensuring that the previously unimaginable will continue to be the new state of normal.

Three: our expanding impact on the physical world. It’s not just our degree of connection that technology is changing exponentially; it’s also the degree of impact we have on the planet itself.

For almost our entire time on earth, the world has made us. We have evolved to survive in our physical environment, where we have been subject to the whims of nature.

But now, increasingly, we humans are shaping the nature of the world we live in. Our footprint has an ever-increasing impact on our environment, and that footprint is also increasing exponentially, thanks to technology.

The earth and our ability to survive on it are — unfortunately — now dependent on our stewardship. And that stewardship is particularly susceptible to the impact of willful ignorance. In the area of climate change alone, willful ignorance could — and has — led to events with massive consequences. A recent study links abnormally hot and cold temperatures to roughly 5 million deaths a year.

For all these reasons, willful ignorance is now something that can have life and death consequences.

Making Sense of Willful Ignorance

Willful ignorance is nothing new. Depending on your beliefs, you could say it was willful ignorance that got Adam and Eve kicked out of the Garden of Eden. But the visibility of it is higher than it’s ever been before. In the past couple of years, we have had a convergence of factors that has pushed willful ignorance to the surface — a perfect storm of fact denial.

Some of those factors include the social media effect, the erosion of traditional journalism and a global health crisis that has us all focusing on the same issue at the same time. The net result is that we all have a very personal interest in the degree of ignorance prevalent in our society.

In one very twisted way, this may be a good thing. As I said, the willfully ignorant have always been with us. But we’ve always been able to shrug and move on, muttering “stupid is as stupid does.”

Now, however, the stakes are getting higher. Our world and society are at a point where willful ignorance can inflict some real and substantial damage. We need to take it seriously and we must start thinking about how to limit its impact.

So, for myself, I’m going to spend some time understanding willful ignorance. Feel free to come along for the ride!

It’s important to understand that willful ignorance is not the same as being stupid — or even just being ignorant, despite thousands of social media memes to the contrary.

Ignorance is one thing. It means we don’t know something. And sometimes, that’s not our fault. We don’t know what we don’t know. But willful ignorance is something very different. It is us choosing not to know something.

For example, I know many smart people who have chosen not to get vaccinated. Their reasons may vary. I suspect fear is a common denominator, and there is no shame in that. But rather than seek information to allay their fears, these folks have doubled down on beliefs based on little to no evidence. They have made a choice to ignore the information that is freely available.

And that’s doubly ironic, because the very same technology that enables willful ignorance has made more information available than ever before.

Willful ignorance is defined as “a decision in bad faith to avoid becoming informed about something so as to avoid having to make undesirable decisions that such information might prompt.”

And this is where the problem lies. The explosion of content has meant there is always information available to support any point of view. Add to that the breakdown of journalistic principles over the past 40 years, and we have a dangerous world of information that has been deliberately falsified in order to appeal to a segment of the population that has chosen to be willfully ignorant.

It seems a contradiction: The more information we have, the more that ignorance is a problem. But to understand why, we have to understand how we make sense of the world.

Making Sense of Our World

Sensemaking is a concept that was first introduced by organizational theorist Karl Weick in the 1970s. The concept has been borrowed by those working in the areas of machine learning and artificial intelligence. At the risk of oversimplification, it provides us a model to help us understand how we “give meaning to our collective experiences.”

[Diagram: the data/frame sensemaking process (D.T. Moore and R. Hoffman, 2011)]

The above diagram (from a 2011 paper by David T. Moore and Robert R. Hoffman) shows the sensemaking process. It starts with a frame — our understanding of what is true about the world. As we get presented with new data, we have to make a choice: Does it fit our frame or doesn’t it?

If it does, we preserve the frame and may elaborate on it, fitting the new data into it. If the data doesn’t support our existing frame, we then have to reframe, building a new frame from scratch.
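
For readers who think better in code, here is a toy sketch of that loop in Python. Every name in it is an illustrative stand-in of mine; Moore and Hoffman describe a conceptual model, not an algorithm.

```python
# A toy sketch of the data/frame sensemaking loop described above.
# Everything here is an illustrative stand-in; the real model is conceptual.

def make_sense(frame, new_data):
    """frame: a dict of beliefs; new_data: a list of (topic, observation) pairs."""
    for topic, observation in new_data:
        expected = frame.get(topic)
        if expected is None or expected == observation:
            # The data fits (or is simply new detail): preserve and elaborate the frame.
            frame[topic] = observation
        else:
            # The data contradicts the frame: reframing is the costly path.
            frame = reframe(frame, topic, observation)
    return frame

def reframe(old_frame, topic, observation):
    # Placeholder for the hard part: discarding a belief and rebuilding
    # the frame around the new evidence.
    new_frame = dict(old_frame)
    new_frame[topic] = observation
    return new_frame

beliefs = {"vaccines": "risky"}
print(make_sense(beliefs, [("vaccines", "safe and effective")]))
# Willful ignorance, in these terms, is refusing to ever take the reframe branch.
```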

Our brains love frames. It’s much less work for the brain to keep a frame than to build a new one. That’s why we tend to stick with our beliefs — another word for a frame — until we’re forced to discard them.

But, as with all human traits, our ways of making sense of the world vary across the population. Some of us are more apt to spend time on the right side of the diagram above: more open to reframing, and more willing to consider evidence that might force us to do so.

That, by the way, is exactly how science is supposed to work. We refer to this capacity as critical thinking: the objective analysis and evaluation of data in order to form a judgment, even if it causes us to have to build a new frame.

Others hold onto their frames for dear life. They go out of their way to ignore data that may cause them to have to discard the frames they hold. This is what I would define as willful ignorance.

It’s misleading to think of this as just being ignorant. That would simply indicate a lack of available data. It’s also misleading to attribute it to a lack of intelligence. That would be an inability to process the data. With willful ignorance, we’re not talking about either of those things. We are talking about a conscious and deliberate decision to ignore available data. And I don’t believe you can fix that.

We fall into the trap of thinking we can educate, shame or argue people out of being willfully ignorant. We can’t. This post is not intended for the willfully ignorant. They have already ignored it. This is just the way their brains work. It’s part of who they are. Wishing they weren’t this way is about as pointless as wishing they were a world-class pole vaulter, that they were seven feet tall or that their brown eyes were blue.

We have to accept that this situation is not going to change. And that’s what we have to start thinking about. Given that we have willful ignorance in the world, what can we do to minimize its impact?

Respecting The Perspective Of Generations

We spend most of our time talking to people who are approximately our age. Our social circle naturally forms from those who were born in the same era as us. We just have a lot more in common with them. And that may not be a good thing. I just turned 60, and one of the things I’m spending more time doing is speaking to people in the generation before me and the generation after me.

Each of us becomes a product of the environment we grew up in. It gives us a perspective that shapes the reality we live in, for good or bad. Sometimes that causes frustration when we interact with those who grew up in a different generation. We just don’t see the world the same way.

And that’s OK. In fact, as I’ve learned from my intergenerational discussions, it can be tremendously valuable. We just have to accept it for what it is.

Take the generation after me — that of my nieces, nephews, and my own children. Armed with determination, energy, and a belief that the world not only should be better but must be better, they are going forward trying to find the shortest path between today and the tomorrow they’re fighting for. For them, there is not a moment to lose.

And they’re right. The sooner we get there, the better it will be for all of us.

As hard as it might be for them to believe, I was once among them. I remember having the righteousness of youth, when what was right and what was wrong were so clearly delineated in my own head. I remember being frustrated with my own parents and grandparents, who seemed so stuck in a world no longer relevant or correct. I remember reprimanding them, seldom patiently, when they said something that was no longer acceptable in the more politically correct world of the 1980s.

But — in the blink of an eye — it’s now some 40 years later. And now, it’s my turn to be corrected.

I accept that. I’m unlearning a lot. I believe the world is a better place than the one I grew up in, so I’m willing to do the work necessary to change my perspective. The world is a more tolerant, fairer, more equitable place. It’s a long way from being good enough, but I do believe it’s heading in the right direction. And when I’m corrected, I know the generation that follows me is usually right. I am literally changing my mind — and that’s not easy.

But I’m also learning to value the perspective of the generation that came before me — the one I was once so quick to dismiss. I’m working to understand the environment they grew up in and the life experiences that shaped their reality. What was the context that ground the lens they see life through? If we are willing to understand that, it can teach us a lot.

Recently, I’ve been spending a lot of my time talking to a generation born during or just before WWII in Italy. Many of them came from the South of Italy. Most of them were left with nothing after the war. The lives of their parents — their possessions, their livelihood, their communities, everything they knew — were trampled underfoot as the battle spread up the boot of Italy for two long years, from July 1943 to May 1945. When the dust and debris finally settled, they emigrated, continuing one of the greatest diasporas in history, because they had no other choice. You don’t leave home until there is no longer a future there to be imagined, no matter how hard you try.

Before we dismiss the perspectives that come from this generation, we have to take a long moment to appreciate the reality that formed them. It is a reality that most of us have never experienced or even imagined. It is a reality that belongs not only to Italians, but to almost every immigrant who left the lives they knew behind.

In my conversations with people who came from this reality, attitudes emerge that definitely don’t always fit well in today’s world. They have learned by hard experience that shit can and does happen. Their trust is hard-won. There is a suspicion of people who come from outside the circle of family and friends. There is a puzzlement with the latest cause that is burning up our social media feed. And yes, there is some cultural baggage that might best be left behind.

But there is also a backbone of courage, a long-simmering determination and a pragmatic view of the future that can be admired, and — if we take the time to listen — should be heeded. While the generation after me is rushing into their life in this world, the generation before me is limping out of it. Both perspectives are enlightening and should be considered. I am stuck in the middle. And I’m finding it’s not a bad place to be, as long as I keep looking both ways.

As any navigator can tell you, it’s much easier to pinpoint your location when you have a few different bearings available. This cross-generational view has long been embedded in Iroquois tradition, where it’s known as the Seven Generation principle: “The thickness of your skin shall be seven spans.”

The saying is commonly interpreted as looking forward to create a sustainable future for seven generations. But indigenous activist Vine Deloria Jr. had a different interpretation: that we must honor and protect the seven generations closest to us. Counting ourselves as one of those, we then look back three generations and forward three. We should make our decisions based on an approximately 150-year time span, looking 75 years forward and 75 years back.

In our culture, we take a much shorter view of things. In doing that, we can often lose our bearings.

To Be There – Or Not To Be There

According to Eventbrite, hybrid events are the hottest thing for 2021. So I started thinking: what would that possibly look like for a planner or a participant?

The interesting thing about hybrid events is that they force us to really think about how we experience things. What process do we go through when we let the outside world in? What do we lose if we do that virtually? What do we gain, if anything? And, more importantly, how do we connect with other people during those experiences?

These are questions we didn’t think much about even a year ago. But today, in a reality that’s trying to straddle both the physical and virtual worlds, they are highly relevant to how we’ll live our lives in the future.

The Italian Cooking Lesson

First, let’s try a little thought experiment.

In our town, the local Italian Club — in which both my wife and I are involved — offered cooking lessons before we were all locked down. Groups of eight to 12 people would get together with an exuberant Italian chef in a large commercial kitchen, and together they would make an authentic dish like gnocchi or ravioli. There was a little vino, a little Italian culture and a lot of laughter. These classes were a tremendous hit.

That all ended last March. But we hope to start thinking about offering them again in late 2021 or 2022. And, if we do, would it make sense to offer them as a “hybrid” event, where you can participate in person or pick up a box of preselected ingredients and follow along in your own kitchen?

As an event organizer, this would be tempting. You can still charge the full price for physical attendance, where you’re restricted to 12 people, but you could create an additional revenue stream by introducing a virtual option with no cap on the number of participants. Even at a lower registration fee, it would still dramatically increase revenue at a relatively small incremental cost. It would be “molto” profitable.
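
To make that concrete, here is the kind of rough math an organizer might run, sketched in Python. Every number below is hypothetical and chosen purely for illustration; none of them come from our actual classes.

```python
# Hypothetical hybrid-event revenue math -- all figures are made up for illustration.
IN_PERSON_PRICE = 120       # full-price ticket, capped by the kitchen
IN_PERSON_CAP = 12
VIRTUAL_PRICE = 40          # lower fee for the at-home ingredient-box option
VIRTUAL_ATTENDEES = 100     # effectively uncapped
VIRTUAL_COST_PER_HEAD = 15  # ingredient box and delivery

in_person_revenue = IN_PERSON_PRICE * IN_PERSON_CAP
virtual_margin = (VIRTUAL_PRICE - VIRTUAL_COST_PER_HEAD) * VIRTUAL_ATTENDEES

print(f"In-person only: ${in_person_revenue:,}")                    # $1,440
print(f"With the virtual add-on: ${in_person_revenue + virtual_margin:,}")  # $3,940
```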

But now consider this as an attendee. Would you sign up for a virtual event like that? If you had no other option to experience it, maybe. But what if you could actually be there in person? Then what? Would you feel relegated to a second-class experience by being isolated in your own kitchen, without many of the sensory benefits that go along with the physical experience?

The Psychology of Zoom Fatigue

When I thought about our cooking lesson example, I was feeling less than enthused. And I wondered why.

It turns out that there’s some actual brain science behind my digital ennui. In an article in the Psychiatric Times, Jena Lee, MD, takes us on a “Neuropsychological Exploration of Zoom Fatigue.”

A decade ago, I was writing a lot about how we balance risk and reward. I believe that a lot of our behaviors can be explained by how we calculate the dynamic tension between those two things. It turns out that it may also be at the root of how we feel about virtual events. Dr. Lee explains,

“A core psychological component of fatigue is a rewards-costs trade-off that happens in our minds unconsciously. Basically, at every level of behavior, a trade-off is made between the likely rewards versus costs of engaging in a certain activity.”

Let’s take our Italian cooking class again. Let’s imagine we’re there in person. For our brain, this would hit all the right “reward” buttons that come with being physically “in the moment.” Subconsciously, our brains would reward us by releasing oxytocin and dopamine along with other “pleasure” neurochemicals that would make the experience highly enjoyable for us. The cost/reward calculation would be heavily weighted toward “reward.”

But that’s not the case with the virtual event. Yes, it might still be considered “rewarding,” but on an entirely different — and lesser — scale than the “in-person” version. On top of that, we would have the added costs of figuring out the technology required, logging into the lesson and trying to follow along. Our risk/reward calculator just might decide the tradeoffs weren’t worth it.

Without my even knowing it, this was the calculation going on in my head that left me less than enthused.

But there is a flip side to this.

Reducing the Risk Virtually

Last fall, a new study from Oracle in the U.K. was published with the headline, “82% of People Believe Robots Can Support Their Mental Health Better than Humans.”

Something about that just didn’t seem right to me. How could this be? Again, we had the choice between virtual and physical connection, and this time the odds were overwhelmingly in favor of the virtual option.

But when I thought about it in terms of risk and reward, it suddenly made sense. Talking about our own mental health is a high-risk activity. It’s sad to say, but opening up to your manager about job-related stress could get you a sympathetic ear, or it could get you fired. We are taking baby steps towards destigmatizing mental health issues, but we’re at the beginning of a very long journey.

In this case, the risk/reward calculation is flipped completely around. Virtual connections, which rely on limited bandwidth — and therefore limited vulnerability on our part — seem like a much lower risk alternative than pouring our hearts out in person. This is especially true if we can remain anonymous.

It’s All About Human Hardware

The idea of virtual/physical hybrids with expanded revenue streams will be very attractive to marketers and event organizers. There will be many jumping on this bandwagon. But, like all the new opportunities that technology brings us, it has to interface with a system that has been around for hundreds of thousands of years — otherwise known as our brain.

The Crazy World of Our Media Obsessions

Are you watching the news less? Me too. Now that the grownups are back in charge, I’m spending much less time checking my news feed.

Whatever you might say about the last four years, it certainly was good for the news business. It was one long endless loop of driving past a horrific traffic accident. Try as we might, we just couldn’t avoid looking.

But according to Internet analysis tool Alexa.com, that may be over. I ran some traffic rank reports for major news portals and they all look the same: a ramp-up over the past 90 days to the beginning of February, and then a precipitous drop off a cliff.

While all the top portals have a similar pattern, it’s most obvious on Foxnews.com.

It was as if someone said, “Show’s over folks. There’s nothing to see here. Move along.” And after we all exhaled, we did!

Not surprisingly, we watch the news more when something terrible is happening. It’s an evolved hardwired response called negativity bias.

Good news is nice. But bad news can kill you. So it’s not surprising that bad news tends to catch our attention.

But this was more than that. We were fixated by Trump. If it were just our bias toward bad news, we would still eventually get tired of it.

That’s exactly what happened with the news on COVID-19. We worked through the initial uncertainty and fear, where we were looking for more information, and at some point moved on to the subsequent psychological stages of boredom and anger. As we did that, we threw up our hands and said, “Enough already!”

But when it comes to Donald Trump, there was something else happening.

It’s been said that Trump might have been the best instinctive communicator to ever take up residence in the White House. We might not agree with what he said, but we certainly were listening.

And while we — and by we, I mean me — think we would love to put him behind us, I believe it behooves us to take a peek under the hood of this particular obsession. Because if we fell for it once, we could do it again.

How the F*$k did this guy dominate our every waking, news-consuming moment for the past four years?

We may find a clue in Bob Woodward’s book on Trump, Rage. He explains that he was looking for a “reflector” — a person who knew Trump intimately and could provide some relatively objective insight into his character.

Woodward found a rather unlikely candidate for his reflector: Trump’s son-in-law, Jared Kushner.

I know, I know — “Kushner?” Just bear with me.

In Woodward’s book, Kushner says there were four things you needed to read and “absorb” to understand how Trump’s mind works.

The first was an op-ed piece in The Wall Street Journal by Peggy Noonan called “Over Trump, We’re as Divided as Ever.” It is not complimentary to Trump. But it does begin to provide a possible answer to our ongoing fixation. Noonan explains: “He’s crazy…and it’s kind of working.”

The second was the Cheshire Cat in Alice in Wonderland. Kushner paraphrased: “If you don’t know where you’re going, any path will get you there.” In other words, in Trump’s world, it’s not direction that matters, it’s velocity.

The third was Chris Whipple’s book, The Gatekeepers: How the White House Chiefs of Staff Define Every Presidency. The insight here is that no matter how clueless Trump was about how to do his job, he still felt he knew more than his chiefs of staff.

Finally, the fourth was Win Bigly: Persuasion in a World Where Facts Don’t Matter, by Scott Adams. That’s right — Scott Adams, the same guy who created the “Dilbert” comic strip. Adams calls Trump’s approach “Intentional Wrongness Persuasion.”

Remember, this is coming from Kushner, a guy who says he worships Trump. This is not apologetic. It’s explanatory — a manual on how to communicate in today’s world. Kushner is embracing Trump’s instinctive, scorched-earth approach to keeping our attention focused on him.

It’s — as Peggy Noonan realized — leaning into the “crazy.”  

Trump represented the ultimate political tribal badge. All you needed to do was read one story on Trump, and you knew exactly where you belonged. You knew it in your core, in your bones, without any shred of ambiguity or doubt. There were few things I was as sure of in this world as where I stood on Donald J. Trump.

And maybe that was somehow satisfying to me.

There was something about standing on one side or the other of the divide created by Trump that was tribal in nature.

It was probably the clearest ideological signal about what was good and what was bad that we’ve seen for some time, perhaps since World War II or the ’60s — two eras that came before most of our lifetimes.

Trump’s genius was that he somehow made both halves of the world believe they were the good guys.

In 2018, Peggy Noonan said that “Crazy won’t go the distance.” I’d like to believe that’s so, but I’m not so sure. There are certainly others who are borrowing a page from Trump’s playbook. Right-wing Republicans Marjorie Taylor Greene and Lauren Boebert are both doing “crazy” extraordinarily well. The fact that almost none of you had to Google them to know who they are proves this.

Whether we’re loving to love, or loving to hate, we are all fixated by crazy.

The problem here is that our media ecosystem has changed. “Crazy” used to be filtered out. But somewhere along the line, news outlets discovered that “crazy” is great for their bottom lines.

As former CBS Chairman and CEO Leslie Moonves said when Trump became the Republican presidential front-runner back in 2016, “It may not be good for America, but it’s damn good for CBS.”

Crazy draws eyeballs like, well, like crazy. It certainly generates more user views than “normal” or “competent.”

In our current media environment — densely intertwined with the wild world of social media — we have no crazy filters. All we have now are crazy amplifiers.

And the platforms that allow this all try to crowd on the same shaky piece of moral high ground.

According to them, it’s not their job to filter out crazy. It’s anti-free speech. It’s un-American. We should be smart enough to recognize crazy when we see it.

Hmmm. Well, we know that’s not working.

Our Disappearing Attention Spans

Last week, Mediapost Editor in Chief Joe Mandese mused about our declining attention spans. He wrote, 

“while in the past, the most common addictive analogy might have been opiates — as in an insatiable desire to want more — these days [consumers] seem more like speed freaks looking for the next fix.”

Mandese cited a couple of recent studies, saying that more than half of mobile users tend to abandon any website that takes longer than three seconds to load. That

“has huge implications for the entire media ecosystem — even TV and video — because consumers increasingly are accessing all forms of content and commerce via their mobile devices.”

The question that begs to be asked here is, “Is a short attention span a bad thing?” The famous comparison is that we are now more easily distracted than a goldfish. But does a shorter attention span negatively impact us, or is it just our brain changing to be a better fit with our environment?

Academics have been debating the impact of technology on our ability to cognitively process things for some time. Journalist Nicholas Carr sounded the warning in his 2010 book, “The Shallows,” where he wrote, 

“[Our brains are] very malleable, they adapt at the cellular level to whatever we happen to be doing. And so the more time we spend surfing, and skimming, and scanning … the more adept we become at that mode of thinking.”

Certainly, Carr is right about the plasticity of our brains. It’s one of the most advantageous features about them. But is our digital environment forever pushing our brains to the shallow end of the pool? Well, it depends. Context is important. One of the biggest factors in determining how we process the information we’re seeing is the device where we’re seeing it.

Back in 2010, Microsoft did a large-scale ethnographic study on how people searched for information on different devices. The researchers found those behaviors differed greatly depending on the platform being used and the intent of the searcher. They found three main categories of search behaviors:

  • Missions are looking for one specific answer (for example, an address or phone number) and often happen on a mobile device.
  • Excavations are widespread searches that need to combine different types of information (for example, researching an upcoming trip or major purchase). They are usually launched on a desktop.
  • Finally, there are Explorations: searching for novelty, often to pass the time. These can happen on all types of devices and can often progress through different devices as the exploration evolves. The initial search may be launched on a mobile device, but as the user gets deeper into the exploration, she may switch to a desktop.
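
If it helps to see that taxonomy at a glance, here is a minimal sketch of it as a lookup table in Python. The three category names come from the study as summarized above; the one-line descriptions and typical devices are my paraphrase, not the study’s wording.

```python
# The three search modes from the Microsoft study, as a simple lookup table.
# Category names are from the study as summarized above; the descriptions and
# "typical device" values are paraphrased for illustration.
SEARCH_MODES = {
    "mission":     {"intent": "one specific answer (an address, a phone number)",
                    "typical_device": "mobile"},
    "excavation":  {"intent": "combine many kinds of information (a trip, a major purchase)",
                    "typical_device": "desktop"},
    "exploration": {"intent": "novelty, passing the time",
                    "typical_device": "any, often hopping between devices"},
}

for mode, profile in SEARCH_MODES.items():
    print(f"{mode:>12}: {profile['intent']} -> {profile['typical_device']}")
```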

The important thing about this research was that it showed our information-seeking behaviors are very tied to intent, which in turn determines the device used. So, at a surface level, we shouldn’t be too quick to extrapolate behaviors seen on mobile devices with certain intents to other platforms or other intents. We’re very good at matching a search strategy to the strengths and weaknesses of the device we’re using.

But at a deeper level, if Carr is right (and I believe he is) about our constant split-second scanning of information to find items of interest making permanent changes in our brains, what are the implications of this?

For such a fundamentally important question, there is a small but rapidly growing body of academic research that has tried to answer it. To add to the murkiness, many of the studies contradict each other. The best summary I could find of academia’s quest to determine if “the Internet is making us stupid” was a 2015 article in the academic journal The Neuroscientist.

The authors sum up by essentially saying both “yes” — and “no.” We are getting better at quickly filtering through reams of information. We are spending fewer cognitive resources memorizing things we know we can easily find online, which theoretically leaves those resources free for other purposes. Finally, for this post, I will steer away from commenting on multitasking, because the academic jury is still very much out on that one.

But the authors also say that 

“we are shifting towards a shallow mode of learning characterized by quick scanning, reduced contemplation and memory consolidation.”

The fact is, we are spending more and more of our time scanning and clicking. There are inherent benefits to us in learning how to do that faster and more efficiently. The human brain is built to adapt and become better at the things we do all the time. But there is a price to be paid. The brain will also become less capable of doing the things we don’t do as much anymore. As the authors said, this includes actually taking the time to think.

So, in answer to the question “Is the Internet making us stupid?,” I would say no. We are just becoming smart in a different way.

But I would also say the Internet is making us less thoughtful. And that brings up a rather worrying prospect.

As I’ve said many times before, the brain thinks both fast and slow. The fast loop is brutally efficient. It is built to get stuff done in a split second, without having to think about it. Because of this, the fast loop has to be driven by what we already know or think we know. Our “fast” behaviors are necessarily bounded by the beliefs we already hold. It’s this fast loop that’s in control when we’re scanning and clicking our way through our digital environments.

But it’s the slow loop that allows us to extend our thoughts beyond our beliefs. This is where we’ll find our “open minds,” if we have such a thing. Here, we can challenge our beliefs and, if presented with enough evidence to the contrary, willingly break them down and rebuild them to update our understanding of the world. In the sense-making loop, this is called reframing.

The more time we spend “thinking fast” at the expense of “thinking slow,” the more we will become prisoners to our existing beliefs. We will be less able to consolidate and consider information that lies beyond those boundaries. We will spend more time “parsing” and less time “pondering.” As we do so, our brains will shift and change accordingly.

Ironically, our minds will change in such a way to make it exceedingly difficult to change our minds.

Happy New Year?

“Speaking of the happy new year, I wonder if any year ever had less chance of being happy. It’s as though the whole race were indulging in a kind of species introversion — as though we looked inward on our neuroses. And the thing we see isn’t very pretty… So we go into this happy new year, knowing that our species has learned nothing, can, as a race, learn nothing — that the experience of ten thousand years has made no impression on the instincts of the million years that preceded.”

That sentiment, relevant as it is to today, was not written about 2021. It was actually written 80 years ago — in 1941 — by none other than John Steinbeck.

John was feeling a little down. I’m sure we can all relate.

It’s pretty easy to say that we have hopefully put the worst year ever behind us. I don’t know about your news feed, but mine has been like a never-ending bus tour of Dante’s nine circles of Hell — and I’m sitting next to the life insurance salesman from Des Moines who decided to have a Caesar salad for lunch.

An online essay by Umair Haque kind of summed up 2020 for me: “The Year of the Idiot.” In it, Haque doesn’t pull any punches,

“It was the year that a pandemic searched the ocean of human stupidity, and found, to its gleeful delight, that it appeared to be bottomless. 2020 was the year that idiots wrecked our societies.”

In case you’re not catching the drift yet, Haque goes on to say, “The average person is a massive, gigantic, malicious, selfish idiot.”

Yeah. That pretty much covers it.

Or does it? Were our societies wrecked? Is the average person truly that shitty? Is the world a vast, soul-sucking, rotten-cabbage-reeking dumpster fire? Or is it just the lens we’re looking through?

If you search hard enough, you can find those who are looking through a different lens — one that happens to be backed by statistical evidence rather than what bubbles to the top of our newsfeed. One of those people is Ola Rosling. He’s carrying on the mission of his late father, Hans Rosling, who was working on the book “Factfulness” when he passed away in 2017. Bill Gates called it “one of the most educational books I’ve ever read.” And Bill reads a lot of books!

Believe it or not, if you remove a global pandemic from the equation (which, admittedly, is a whole new scale of awful) the world may actually be in better shape than it was 12 months ago. And even if you throw the pandemic into the mix, there are some glimmers of silver peeking through the clouds.

Here are some things you may have missed in your news feed:

Wild polio was eradicated from Africa. That’s big news. It’s a massive achievement that had its to-do box ticked last August. And I’m betting you never heard about it.

Also, the medical and scientific world has never before mobilized and worked together on a project like the new COVID mRNA vaccines now rolling out. Again, this is a huge step forward that will have far-reaching impacts on healthcare in the future. But that’s not what the news is talking about.

Here’s another thing. At long last, it looks like the world may finally be ready to start tearing apart the layers that hide systemic racism. What we’re learning is that it may not be the idiots  — and, granted, there are many, many idiots — who are the biggest problem. It may be people like me, who have unknowingly perpetuated the system and are finally beginning to see the endemic bias baked into our culture.

These are just three big steps forward that happened in 2020. There are others. We just aren’t talking about them.

We always look on the dark side. We’re a “glass half-empty” species. That’s what Rosling’s book is about: our tendency to skip over the facts to rush to the worst possible view of things. We need no help in that regard — but we get it anyway from the news business, which, run by humans and aimed at humans, amplifies our proclivity for pessimism.

I’m as glad as anyone to see 2020 in my rear-view mirror. But I am carrying something of that year forward with me: a resolution to spend more time looking for facts and relying less on media “spun” for profit to understand the state of the world.

As we consume media, we have to remember that good news is just not as profitable as bad news. We need to broaden our view to find the facts. Hans Rosling warned us, “Forming your worldview by relying on the media would be like forming your view about me by looking only at a picture of my foot.”

Yes, 2020 was bad, but it was also good. And because there are forces that swing the pendulum both ways, many of the things that were good may not have happened without the bad. In the same letter in which Steinbeck expressed his pessimism about 1941, he went on to say this:

“Not that I have lost any hope. All the goodness and the heroisms will rise up again, then be cut down again and rise up. It isn’t that the evil thing wins — it never will — but that it doesn’t die. I don’t know why we should expect it to. It seems fairly obvious that two sides of a mirror are required before one has a mirror, that two forces are necessary in man before he is man.”

There are two sides to every story, even when it’s a horror story like 2020.

What is the Moral Responsibility of a Platform?

The owner of the AirBnB home in Orinda, California, suspected something was up. The woman who wanted to rent the house for Halloween night swore it wasn’t for a party. She said it was for a family reunion that had to relocate at the last minute because of the wildfire smoke coming from the Kincade fire, 85 miles north of Orinda. The owners reluctantly agreed to rent the home for one night.

Shortly after 9 pm, the neighbors called the owner, complaining of a party raging next door. The owners verified this through their doorbell camera. The police were sent. Over 100 people who had responded to a post on social media were packed into the million-dollar home. At 10:45 pm, with no warning, things turned deadly. Gunshots were fired. Four men in their twenties were killed immediately. A 19-year-old woman died the next day. Several others were injured.

Here is my question. Is AirBnB partly to blame for this?

This is a prickly question. And it’s one that extends to any of the platforms that are highly disruptive. Technical disruption is a race against our need for order and predictability. When the status quo is upended, there is a progression towards a new civility that takes time, but technology is outstripping it. Platforms create new opportunities – for the best of us and the worst.

The simple fact is that technology always unleashes ethical ramifications – the more disruptive the technology, the more serious the ethical considerations. The other tricky bit is that some ethical considerations can be foreseen, but others cannot.

I have often said that our world is becoming a more complex place. Technology is multiplying this complexity at an ever-increasing pace. And the more complex things are, the more difficult they are to predict.

As Homo Deus author Yuval Noah Harari has observed, the pace of technology is making our world more complex, and the future increasingly difficult to predict:

“Today our knowledge is increasing at breakneck speed, and theoretically we should understand the world better and better. But the very opposite is happening. Our new-found knowledge leads to faster economic, social and political changes; in an attempt to understand what is happening, we accelerate the accumulation of knowledge, which leads only to faster and greater upheavals. Consequently, we are less and less able to make sense of the present or forecast the future.”

This acceleration is also eliminating the gap between cause and consequence. We used to have the luxury of time to digest disruption. But now, the gap between the introduction of the technology and the ripples of the ramifications is shrinking.

Think about the ethical dilemmas and social implications introduced by the invention of the printing press. Thanks to this technology, literacy started creeping down through the social classes, and it disrupted entire established hierarchies, unleashed ideological revolutions and ushered in tsunamis of social change. But the causes and consequences were separated by decades and even centuries. Should Gutenberg be held responsible for the French Revolution? The question seems laughable, but only because almost three and a half centuries lie between the two.

As the printing press eventually proved, technology typically dismantles vertical hierarchies. It democratizes capabilities – spreading them down to new users and – in the process – making the previously impossible possible. I have always said that technology is simply a tool, albeit an often disruptive one. It doesn’t change human behaviors. It enables them. But here we have an interesting phenomenon: if technology pushes capabilities down to more people and simultaneously frees those users from the restraint of a verticalized governing structure, you have a highly disruptive sociological experiment happening in real time with a vast sample of subjects.

Most things about human nature are governed by a normal distribution curve – also known as a bell curve. Behaviors expressed through new technologies are no exception. When you rapidly expand access to a capability, you are going to have a spectrum of ethical attitudes interacting with it. At one end of the spectrum, you will have bad actors. You will find these actors on both sides of a market expanding at roughly the same rate as our universe. And those actors will do awful things with the technology.

Our innate sense of fairness seeks a simple line between cause and effect. If shootings happen at an AirBnB party house, then AirBnB should be held at least partly responsible. Right?

I’m not so sure. That’s the simple answer, but after giving it much thought, I don’t believe it’s the right one. Like my previous example of the printing press, I think trying to saddle a new technology with the unintentional and unforeseen social disruption unleashed by that technology is overly myopic. It’s an attitude that will halt technological progress in its tracks.

I fervently believe new technologies should be designed with humanitarian principles in mind. They should elevate humans, strive for neutrality, be impartial and foster independence. In the real world, they should do all this in a framework that allows for profitability. It is this, and only this, that is reasonable to ask from any new technology. To try to ask it to foresee every potential negative outcome or to retroactively hold it accountable when those outcomes do eventually occur is both unreasonable and unrealistic.

Disruptive technologies will always find the loopholes in our social fabric. They will make us aware of the vulnerabilities in our legislation and governance. If there is an answer to be found here, it is to be found in ourselves. We need to take accountability for the consequences of the technologies we adopt. We need to vote for governments that are committed to keeping pace with disruption through timely and effective governance.

Like it or not, the technology we have created and adopted has propelled us into a new era of complexity and unpredictability. We are flying into uncharted territory by the seat of our pants here. And before we rush to point fingers, we should remember: we’re the ones who asked for it.

Why Quitting Facebook is Easier Said than Done

Not too long ago, I was listening to an interview with a privacy expert about… you guessed it, Facebook. The gist of the interview was that Facebook can’t be trusted with our personal data, as it has proven time and again.

But when asked if she would quit Facebook completely because of this — as tech columnist Walt Mossberg did — the expert said something interesting: “I can’t really afford to give up Facebook completely. For me, being able to quit Facebook is a position of privilege.”

Wow! There is a lot living in that statement. It means Facebook is fundamental to most of our lives — it’s an essential service. But it also means that we don’t trust it — at all. Which puts Facebook in the same category as banks, cable companies and every level of government.

Facebook — in many minds, anyway — became an essential service because of Metcalfe’s Law, which states that the effect of a network is proportional to the square of the number of connected users of the system. More users means disproportionately more value. Facebook has Metcalfe’s Law nailed. It has almost two and a half billion users.
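
A quick back-of-the-envelope comparison shows why that squared relationship matters. The 2.5 billion figure is the user count cited above; the 100 million rival network is an arbitrary benchmark I picked for contrast.

```python
# Metcalfe's Law: network value grows with the square of the number of users.
# The 2.5 billion figure is Facebook's approximate user count cited above;
# the 100 million "rival" is an arbitrary benchmark for comparison.
def metcalfe_value(users: int) -> int:
    return users ** 2

facebook_users = 2_500_000_000
rival_users = 100_000_000

ratio_users = facebook_users / rival_users                                    # 25x the users...
ratio_value = metcalfe_value(facebook_users) / metcalfe_value(rival_users)    # ...625x the value

print(f"{ratio_users:.0f}x the users, {ratio_value:.0f}x the network value")
```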

But it’s more than just sheer numbers. It’s the nature of the engagement. Thanks to a premeditated addictiveness in Facebook’s design, its users are regular users. Of those 2.5 billion users, 1.6 billion log in daily, and 1.1 billion log in daily from their mobile devices. That means roughly 15% of all the people in the world are constantly — addictively — connected to Facebook.

And that’s why Facebook appears to be essential. If we need to connect to people, Facebook is the most obvious way to do it. If we have a business, we need Facebook to let our potential customers know what we’re doing. If we belong to a group or organization, we need Facebook to stay in touch with other members. If we are social beasts at all, we need Facebook to keep our social network from fraying away.

We don’t trust Facebook — but we do need it.

Or do we? After all, we Homo sapiens have managed to survive for 99.9925% of our collective existence without Facebook. And there is mounting research indicating that going cold turkey on Facebook is great for your mental health. But like all things that are good for you, quitting Facebook can be a real pain in the ass.
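
That oddly precise percentage roughly checks out, assuming something like 200,000 years of Homo sapiens and about 15 years of Facebook as a mass platform. Both inputs are loose assumptions on my part:

```python
# Rough check on the "99.9925%" figure. Both inputs are loose assumptions:
# ~200,000 years of Homo sapiens, ~15 years of Facebook as a mass-market platform.
HOMO_SAPIENS_YEARS = 200_000
FACEBOOK_YEARS = 15

share_without_facebook = (HOMO_SAPIENS_YEARS - FACEBOOK_YEARS) / HOMO_SAPIENS_YEARS
print(f"{share_without_facebook:.4%}")  # prints 99.9925%
```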

Last year, New York Times tech writer Brian Chen decided to ditch Facebook. This is a guy who is fully conversant in tech — and even he found that making the break was much easier said than done. Facebook, in its malevolent brilliance, has erected some significant barriers to exit for users who do try to make a break for it.

This is especially true if you have fallen into the convenient trap of using Facebook’s social sign-in on sites rather than juggling multiple passwords and user IDs. If you’re up for the challenge, Chen has put together a 6-step guide to making a clean break of it.

But what if you happen to use Facebook for advertising? You’ve essentially sold your soul to Zuckerberg. Reading through Chen’s guide, I’ve decided that it’s just easier to go into the Witness Protection Program. Even there, Facebook will still be tracking me.

By the way, after six months without Facebook, Chen did a follow-up on how his life had changed. The short answer is: not much, but what did change was for the better. His family didn’t collapse. His friends didn’t desert him. He still managed to have a social life. He spent a lot less on spontaneous online purchases. And he read more books.

The biggest outcome was that advertisers “gave up on stalking” him. Without a steady stream of personal data from Facebook, Instagram thought he was a woman.

Whether you’re able to swear off Facebook completely or not, I wonder what the continuing meltdown of trust in Facebook will do to its usage patterns. As in most things digital, young people seem to have intuitively stumbled on the best way to use Facebook. Use it if you must to connect to people when you need to (in their case, grandmothers and great-aunts) — but for heaven’s sake, don’t post anything even faintly personal. Never afford Facebook’s AI the briefest glimpse into your soul. No personal affirmations, no confessionals, no motivational posts and — for the love of all that is democratic — nothing political.

Oh, one more thing. Keep your damned finger off of the like button, unless it’s for your cousin Shermy’s 55th birthday celebration in Zihuatanejo.

Even then, maybe it’s time to pick up the phone and call the ol’ Shermeister. It’s been too long.

The Internet: Nasty, Brutish And Short

When the internet ushered in an explosion of information in the mid-to-late ’90s, there were many — I among them — who believed humans would get smarter. What we didn’t realize then is that the opposite would eventually prove to be true.

The internet lures us into thinking with half a brain. Actually, with less than half a brain – and the half we’re using is the least thoughtful, most savage half. The culprit is the speed of connection and reaction. We are now living in a pinball culture, where the speed of play means we have to react by instinct. There is no time left for thoughtfulness.

Daniel Kahneman’s monumental book, “Thinking, Fast and Slow,” lays out the two loops we use for mental processing. There’s the fast loop, our instinctive response to situations, and there’s the slow loop, our thoughtful processing of reality.

Humans need both loops. This is especially true in the complexity of today’s world. The more complex our reality, the more we need the time to absorb and think about it.

If we could only think fast, we’d all believe in capital punishment, extreme retribution and eye-for-an-eye retaliation. We would be disgusted and pissed off almost all the time. We would live in the Hobbesian State of Nature (from English philosopher Thomas Hobbes): the “natural condition of mankind” is what would exist if there were no government, no civilization, no laws, and no common power to restrain human nature. The state of nature is a “war of all against all,” in which human beings constantly seek to destroy each other in an incessant pursuit of power. Life in the state of nature is “nasty, brutish and short.”

That is not the world I want to live in. I want a world of compassion, empathy and respect. But the better angels of our nature rely on thoughtfulness. They take time to come to their conclusions.

With its dense interconnectedness, the internet has created a culture of immediate reaction. We react without all the facts. We are disgusted and pissed off all the time. This is the era of “cancel” and “callout” culture. The court of public opinion is now less like an actual court and more like a school of sharks in a feeding frenzy.

We seem to think this is OK because for every post we see that makes us rage inside, we also see posts that make us gush and goo. Every hateful tweet we see is leavened with a link to a video that tugs at our heartstrings. We are quick to point out that, yes, there is the bad — but there is an equal amount of good. Either can go viral. Social media simply holds up a mirror that reflects the best and worst of us.

But that’s not really true. All these posts have one thing in common: They are digested too quickly to allow for thoughtfulness. Good or bad, happy or mad — we simply react and scroll down. FOMO continues to drive us forward to the next piece of emotionally charged clickbait. 

There’s a reason why social media is so addictive: All the content is aimed directly at our “Thinking Fast” hot buttons. And evolution has reinforced those hot buttons with generous discharges of neurochemicals that act as emotional catalysts. Our brain online is a junkie jonesing for a fix of dopamine or noradrenaline or serotonin. We get our hit and move on.

Technology is hijacking our need to pause and reflect. Marshall McLuhan was right: The medium is the message and, in this case, the medium is one that is hardwired directly to the inner demons of our humanity. It took humans over five thousand years to become civilized. Ironically, one of our greatest achievements is dismantling that civilization faster than we think. Literally.