The Crazy World of Our Media Obsessions

Are you watching the news less? Me too. Now that the grownups are back in charge, I’m spending much less time checking my news feed.

Whatever you might say about the last four years, it certainly was good for the news business. It was one long endless loop of driving past a horrific traffic accident. Try as we might, we just couldn’t avoid looking.

But according to Internet analysis tool Alexa.com, that may be over. I ran some traffic rank reports for major news portals and they all look the same: a steady ramp-up over the 90 days leading into February, and then a precipitous drop off a cliff.

While all the top portals have a similar pattern, it’s most obvious on Foxnews.com.

It was as if someone said, “Show’s over folks. There’s nothing to see here. Move along.” And after we all exhaled, we did!

Not surprisingly, we watch the news more when something terrible is happening. It’s an evolved hardwired response called negativity bias.

Good news is nice. But bad news can kill you. So it’s not surprising that bad news tends to catch our attention.

But this was more than that. We were fixated on Trump. If it were just our bias toward bad news, we would still have eventually tired of it.

That’s exactly what happened with the news on COVID-19. We worked through the initial uncertainty and fear, where we were looking for more information, and at some point moved on to the subsequent psychological stages of boredom and anger. As we did that, we threw up our hands and said, “Enough already!”

But when it comes to Donald Trump, there was something else happening.

It’s been said that Trump might have been the best instinctive communicator to ever take up residence in the White House. We might not agree with what he said, but we certainly were listening.

And while we — and by we, I mean me — think we would love to put him behind us, I believe it behooves us to take a peek under the hood of this particular obsession. Because if we fell for it once, we could do it again.

How the F*$k did this guy dominate our every waking, news-consuming moment for the past four years?

We may find a clue in Bob Woodward’s book on Trump, Rage. He explains that he was looking for a “reflector” — a person who knew Trump intimately and could provide some relatively objective insight into his character.

Woodward found a rather unlikely candidate for his reflector: Trump’s son-in-law, Jared Kushner.

I know, I know — “Kushner?” Just bear with me.

In Woodward’s book, Kushner says there were four things you needed to read and “absorb” to understand how Trump’s mind works.

The first was an op-ed piece in The Wall Street Journal by Peggy Noonan called “Over Trump, We’re as Divided as Ever.” It is not complimentary to Trump. But it does begin to provide a possible answer to our ongoing fixation. Noonan explains: “He’s crazy…and it’s kind of working.”

The second was the Cheshire Cat in Alice in Wonderland. Kushner paraphrased: “If you don’t know where you’re going, any path will get you there.” In other words, in Trump’s world, it’s not direction that matters, it’s velocity.

The third was Chris Whipple’s book, The Gatekeepers: How the White House Chiefs of Staff Define Every Presidency. The insight here is that no matter how clueless Trump was about how to do his job, he still felt he knew more than his chiefs of staff.

Finally, the fourth was Win Bigly: Persuasion in a World Where Facts Don’t Matter, by Scott Adams. That’s right — Scott Adams, the same guy who created the “Dilbert” comic strip. Adams calls Trump’s approach “Intentional Wrongness Persuasion.”

Remember, this is coming from Kushner, a guy who says he worships Trump. This is not apologetic. It’s explanatory — a manual on how to communicate in today’s world. Kushner is embracing Trump’s instinctive, scorched-earth approach to keeping our attention focused on him.

It’s — as Peggy Noonan realized — leaning into the “crazy.”  

Trump represented the ultimate political tribal badge. All you needed to do was read one story on Trump, and you knew exactly where you belonged. You knew it in your core, in your bones, without any shred of ambiguity or doubt. There were few things I was as sure of in this world as where I stood on Donald J. Trump.

And maybe that was somehow satisfying to me.

There was something about standing on one side or the other of the divide created by Trump that was tribal in nature.

It was probably the clearest ideological signal about what was good and what was bad that we’ve seen for some time, perhaps since World War II or the ’60s — both of which predate most of our lifetimes.

Trump’s genius was that he somehow made both halves of the world believe they were the good guys.

In 2018, Peggy Noonan said that “Crazy won’t go the distance.” I’d like to believe that’s so, but I’m not so sure. There are certainly others who are borrowing a page from Trump’s playbook. Right-wing Republicans Marjorie Taylor Greene and Lauren Boebert are both doing “crazy” extraordinarily well. The fact that almost none of you had to Google them to know who they are proves this.

Whether we’re loving to love or loving to hate, we are all fixated on crazy.

The problem here is that our media ecosystem has changed. “Crazy” used to be filtered out. But somewhere along the line, news outlets discovered that “crazy” is great for their bottom lines.

As former CBS Chairman and CEO Leslie Moonves said when Trump became the Republican presidential front-runner back in 2016, “It may not be good for America, but it’s damned good for CBS.”

Crazy draws eyeballs like, well, like crazy. It certainly generates more user views than “normal” or “competent.”

In our current media environment — densely intertwined with the wild world of social media — we have no crazy filters. All we have now are crazy amplifiers.

And the platforms that allow all this try to crowd onto the same shaky piece of moral high ground.

According to them, it’s not their job to filter out crazy. It’s anti-free speech. It’s un-American. We should be smart enough to recognize crazy when we see it.

Hmmm. Well, we know that’s not working.

Connected Technologies are Leaving Our Seniors Behind

One of my pandemic projects has been editing a video series of oral history interviews we did with local seniors in my community. Last week, I finished the first video in the series. The original plan, pre-pandemic, was to unveil the video as a special event at a local theater, with the participants attending. Obviously, given our current reality, we had to change our plans.

We, like the rest of the world, moved our event online. As I started working through the logistics of this, I quickly realized something: Our seniors are on the other side of a wide and rapidly growing chasm. Yes, our society is digitally connected in ways we never were before, but those connections are not designed for the elderly. In fact, if you were looking for something that seemed deliberately designed to disadvantage a segment of our population, you would be hard-pressed to find a better example than internet connectivity and the elderly.

I have to admit, for much of the past year, I have been pretty focused on what I have sacrificed because of the pandemic. But I am still a pretty connected person. I can Zoom and have a virtual visit with my friends. If I wonder how my daughters are doing, I can instantly text them. If I miss their faces, I can FaceTime them. 

I have taken on the projects I’ve been able to do thanks to the privilege of being wired into the virtual world. I can even go on a virtual bike ride with my friends through the streets of London, courtesy of Zwift.

Yes, I have given up things, but I have also been able to find digital substitutes for many of those things. I’m not going to say it’s been perfect, but it’s certainly been passable.

My stepdad, who is turning 86, has been able to do none of those things. He is in a long-term care home in Alberta, Canada. His only daily social connections consist of brief interactions with staff during mealtime and when they check his blood sugar levels and give him his medication. All the activities that used to give him a chance to socialize are gone. Imagine life for him, where his sum total of connection is probably less than 30 minutes a day. And, on most days, none of that connecting is done with the people he loves.

Up until last week, family couldn’t even visit him. He was locked down due to an outbreak at his home. For my dad, there were no virtual substitutes available. He is not wired in any way for digital connection. If anyone has paid the social price of this pandemic, it’s been my dad and people like the seniors I interviewed, for whom I was desperately trying to find a way just to watch a 13-minute video they had starred in.

A recent study by mobile technology manufacturer Ericsson looked specifically at the relationship between technology and seniors during the pandemic. The study focused on what the company termed the “young-old” seniors, those aged 65-74. They didn’t deal with “middle-old” (aged 75-85) or “oldest-old” (86 plus) because — well, probably because Ericsson couldn’t find enough who were connected to act as a representative sample.

But they did find that even the “young old” were falling behind in their ability to stay connected thanks to COVID-19. These are people who have owned smartphones for at least a decade, many of whom had to use computers and technology in their jobs. Up until a year ago, they were closing the technology gap with younger generations. Then, last March, they started to fall behind.

They were still using the internet, but younger people were using it even more. And, as they got older, they were finding it increasingly daunting to adopt new platforms and technology. They didn’t have the same access to the “family tech support” of children or grandchildren to help get them over the learning curve. They were sticking to the things they knew how to do as the rest of the world surged forward and started living their lives in a digital landscape.

But this was not the group that was part of my video project. My experience had been with the “middle old” and “oldest old”: of the eight seniors I was dealing with, half fell into each group, and only two had email addresses. If the “young old” are being left behind by technology, these people were never in the race to begin with. As the world was forced to reset to an online reality, these people were never given the option. They were stranded in a world suddenly disconnected from everything they knew and loved.

Predictably, the Ericsson study proposes smartphones as the solution for many of the problems of the pandemic, giving seniors more connection, more confidence and more capabilities. If only they got connected, the study says, life would be better.

But that’s not a solution with legs. It won’t go the distance. And to understand why, we just have to look at the two age cohorts the study didn’t focus on, the “middle old” and the “oldest old.”

Perhaps the hardest hit have been the “oldest old,” who have sacrificed both physical and digital connection, as this Journals of Gerontology article notes. Four from my group lived in long-term care facilities. Many of these were locked down at some point due to local outbreaks within the facility. Suddenly, the family support they required to connect with their family and friends was no longer available. The technological tools that we take for granted — which we were able to slot in to take the place of things we were losing — were unimaginable to them. They were effectively sentenced to solitary confinement.

A recent study from Germany found that only 3% of those living in long-term care facilities used an internet-connected device. A lot of the time, cognitive declines, even when they’re mild, can make trying to use technology an exercise in frustration.

When my dad went into his long-term care home, my sister and I gave him one of our old phones so he could stay connected. We set everything up and did receive a few experimental texts from him. But soon, it just became too confusing and frustrating for him to use without our constant help. He played solitaire on it for a while, then it ended up in a drawer somewhere. We didn’t push the issue. It just wasn’t the right fit.

But it’s not just my dad who struggled with technology. Even if an aging population starts out as reasonably proficient users, it can be overwhelming to keep up with new hardware, new operating systems and new security requirements. I’m not even “young old” yet, and I’ve worked with technology all my life. I owned a digital marketing company, for heaven’s sake. And even for me, it sometimes seems like a full-time job staying on top of the constant stream of updates and new things to learn and troubleshoot. As connected technology leaps forward, it does not seem unduly concerned that it’s leaving the most vulnerable segment of our population behind.

COVID-19 has pushed us into a virtual world where connection is not just a luxury, but a condition of survival. We need to connect to live. That is especially true for our seniors, who have had all the connections they relied on taken from them. We can’t leave them behind. Connected technology can no longer ignore them.

This is one gap we need to build a bridge over.

Our Disappearing Attention Spans

Last week, MediaPost Editor in Chief Joe Mandese mused about our declining attention spans. He wrote,

“while in the past, the most common addictive analogy might have been opiates — as in an insatiable desire to want more — these days [consumers] seem more like speed freaks looking for the next fix.”

Mandese cited a couple of recent studies showing that more than half of mobile users tend to abandon any website that takes longer than three seconds to load. That

“has huge implications for the entire media ecosystem — even TV and video — because consumers increasingly are accessing all forms of content and commerce via their mobile devices.”

The question that begs to be asked here is, “Is a short attention span a bad thing?” The famous comparison is that we are now more easily distracted than a goldfish. But does a shorter attention span negatively impact us, or is it just our brain changing to be a better fit with our environment?

Academics have been debating the impact of technology on our ability to cognitively process things for some time. Journalist Nicholas Carr sounded the warning in his 2010 book, “The Shallows,” where he wrote, 

“(Our brains are) very malleable, they adapt at the cellular level to whatever we happen to be doing. And so the more time we spend surfing, and skimming, and scanning … the more adept we become at that mode of thinking.”

Certainly, Carr is right about the plasticity of our brains. It’s one of the most advantageous features about them. But is our digital environment forever pushing our brains to the shallow end of the pool? Well, it depends. Context is important. One of the biggest factors in determining how we process the information we’re seeing is the device where we’re seeing it.

Back in 2010, Microsoft did a large-scale ethnographic study on how people searched for information on different devices. The researchers found those behaviors differed greatly depending on the platform being used and the intent of the searcher. They found three main categories of search behaviors:

  • Missions involve looking for one specific answer (for example, an address or phone number) and often happen on a mobile device.
  • Excavations are widespread searches that need to combine different types of information (for example, researching an upcoming trip or major purchase). They are usually launched on a desktop.
  • Finally, there are Explorations: searches for novelty, often to pass the time. These can happen on all types of devices and can often progress through different devices as the exploration evolves. The initial search may be launched on a mobile device, but as the user gets deeper into the exploration, she may switch to a desktop.

The important thing about this research was that it showed our information-seeking behaviors are very tied to intent, which in turn determines the device used. So, at a surface level, we shouldn’t be too quick to extrapolate behaviors seen on mobile devices with certain intents to other platforms or other intents. We’re very good at matching a search strategy to the strengths and weaknesses of the device we’re using.

But at a deeper level, if Carr is right (and I believe he is) about our constant split-second scanning of information to find items of interest making permanent changes in our brains, what are the implications of this?

For such a fundamentally important question, there is only a small but rapidly growing body of academic research that has tried to answer it. To add to the murkiness, many of the studies contradict each other. The best summary I could find of academia’s quest to determine if “the Internet is making us stupid” was a 2015 article in the academic journal The Neuroscientist.

The authors sum up by essentially saying both “yes” — and “no.” We are getting better at quickly filtering through reams of information. We are spending fewer cognitive resources memorizing things we know we can easily find online, which theoretically leaves those resources free for other purposes. Finally, for this post, I will steer away from commenting on multitasking, because the academic jury is still very much out on that one.

But the authors also say that 

“we are shifting towards a shallow mode of learning characterized by quick scanning, reduced contemplation and memory consolidation.”

The fact is, we are spending more and more of our time scanning and clicking. There are inherent benefits to us in learning how to do that faster and more efficiently. The human brain is built to adapt and become better at the things we do all the time. But there is a price to be paid. The brain will also become less capable of doing the things we don’t do as much anymore. As the authors said, this includes actually taking the time to think.

So, in answer to the question “Is the Internet making us stupid?,” I would say no. We are just becoming smart in a different way.

But I would also say the Internet is making us less thoughtful. And that brings up a rather worrying prospect.

As I’ve said many times before, the brain thinks both fast and slow. The fast loop is brutally efficient. It is built to get stuff done in a split second, without having to think about it. Because of this, the fast loop has to be driven by what we already know or think we know. Our “fast” behaviors are necessarily bounded by the beliefs we already hold. It’s this fast loop that’s in control when we’re scanning and clicking our way through our digital environments.

But it’s the slow loop that allows us to extend our thoughts beyond our beliefs. This is where we’ll find our “open minds,” if we have such a thing. Here, we can challenge our beliefs and, if presented with enough evidence to the contrary, willingly break them down and rebuild them to update our understanding of the world. In the sense-making loop, this is called reframing.

The more time we spend “thinking fast” at the expense of “thinking slow,” the more we will become prisoners to our existing beliefs. We will be less able to consolidate and consider information that lies beyond those boundaries. We will spend more time “parsing” and less time “pondering.” As we do so, our brains will shift and change accordingly.

Ironically, our minds will change in such a way to make it exceedingly difficult to change our minds.

Missing the Mundane

I’ve realized something: I miss the mundane.

Somewhere along the line, mundanity got a bad rap. It became a synonym for boring. But it actually means worldly. It refers to the things you experience when you’re out in the world.

And I miss that — a lot.

There is a lot of stuff that happens when we’re living our lives that we don’t give enough credit to: Petting a dog being taken for a walk. A little flirting with another human we find attractive. Doing some people-watching while we eat our bagel in a mall’s food court. Random situational humor that plays itself out on the sidewalk in front of us. Discovering that the person cutting your hair is also a Monty Python fan. Snippets of conversation — either ones we’re participating in, or ones we overhear while we wait for the bus. Running into an old acquaintance. Even being able to smile at a stranger and have them smile back at you.

The mundane is built of all those hundreds of little, inconsequential social exchanges that happen daily in a normal world that we ordinarily wouldn’t give a second thought to.

And sometimes, serendipitously, we luck upon the holy grail of mundanity — that random “thing” that makes our day.

These are the things we live for. And now, almost all of these things have been stripped from our lives.

I didn’t realize I missed them because I never assigned any importance to them. If I did a signal-to-noise ratio analysis of my life, all these things would fall in the latter category. Most of the time, I wasn’t even fully aware that they were occurring. But I now realize when you add them all up, they’re actually a big part of what I’m missing the most. And I’ve realized that because I’ve been forced to subtract them — one by one — from my life.

I have found that the mundane isn’t boring. It’s the opposite — the seasoning that adds a little flavor to my day-to-day existence.

For the past 10 months, I thought the problem was that I was missing the big things: travel, visiting loved ones, big social gatherings. And I do miss those things. But those things are the tentpoles – the infrequent, yet consequential things that we tend to hang our happiness on. We failed to realize that in between those tentpoles, there is the fabric of everyday life, which has also been eliminated.

It’s not just that we don’t have them. It’s also that we’ve tried to substitute other things for them. And those other things may be making it worse. Things like social media and way too much time spent looking at the news. Bingeing on Netflix. Forcing ourselves into awkward online Zoom encounters just because it seems like the thing to do. A suddenly developed desire to learn Portuguese, or how to bake sourdough bread.

It’s not that all these things are bad. It’s just that they’re different from what we used to consider normal — and doing them reinforces the gap that lies between then and now. They add to that gnawing discontent we have with our new forced coping mechanisms.

The mundane has always leavened our lives. But now, we’ve swapped the living of our lives for being entertained — and whether it’s the news or the new show we’re bingeing, entertainment has to be overplayed. It is nothing but peaks and valleys, with no middle ground. When we actually do the living, rather than the watching, we spend the vast majority of our time in that middle ground — the mundane, which is our emotional reprieve.

I’ve also noticed my social muscles have atrophied over the past several months due to lack of exercise. It’s been ages since I’ve had to make small talk. Every encounter now — as infrequent as they are — seems awkward. Either I’m overeager, like a puppy that’s been left alone in a house all day, or I’m just not in any mood to palaver.  

Finally, it’s these everyday mundane encounters that used to give me anecdotal evidence that not all people were awful. Every day I used to see examples of small kindnesses, unexpected generosity and just plain common courtesy. Yes, there were also counterpoints to all of these, but it almost always netted out to the good. It used to reaffirm my faith in people on a daily basis.

With that source of reaffirmation gone, I have to rely on the news and social media. And — given what those two things are — I know I will only see the extremes of human nature. It’s my “angel and asshole” theory: that we all lie on a bell curve somewhere between the two, and our current situation will push us from the center closer to those two extremes. You also know that the news and social media are going to be biased towards the “asshole” end of the spectrum.

There’s a lot to be said for the mundane — and I have. So I’ll just wrap up with my hope that my life — and yours — will become a little more mundane in the not-too-distant future.

The Timeline of Factfulness

After last Wednesday, when it seemed that our reality was splitting at the seams, I was surprised to see that financial markets seemed to ignore what was happening in Washington. The market racked up a 1% gain. I later learned that financial markets have a history of being rather oblivious to social upheaval.

Similarly, a newsletter I subscribe to about academic research was packed with recent discoveries. Not one of the 35 links in that day’s edition pointed to anything remotely relevant to what was happening at that time in Washington, D.C. (or various other state capitals in the country). That was less surprising to me than the collective shrugging off of events by financial markets, but it still made an interesting contrast clear to me.

These two corners of the world are not tied to the happenings of today. Markets look forward and lay economic bets on what will be. And apparently they had bet that the events of January 6 wouldn’t have any lasting impact.

Scientific journals look backward and report on what has already happened in the world of academic research. Neither is very focused on today.

But there is another reason why these two corners of the world seemed unfazed by the news headlines of January 6th. Science and the markets are two examples of things driven by facts and data. Yes, emotion certainly plays a part. Investors have long known that irrational exuberance or fear can drive artificial bubbles or crashes. And the choice of research paths to take is a human one, which means it’s inevitably driven by emotions.

But both these ecosystems try to systematically reduce the role of emotion as much as possible by relying on facts and data. And because facts and data do not reveal their stories immediately but rather over time in the form of trends, they have to take a longer view of the world.

Therefore, these things operate on different timelines from the news. Financial markets use what’s happening now – today – as just one of many inputs into a calculated bet that will be weeks or months in the future.

Science takes a longer view, using the challenges of today to set a research agenda that may be years away from realizing its pay-off. Both finance and science use what’s happening right now as one input to determine what will be in the future, but neither focuses exclusively on today.

In contrast, that’s exactly what the news has to do. And it hyperbolizes the now, stripping the ordinary from the extraordinary, separating it, picking it out, and concentrating it for our consumption.

The fact is, both markets and science have to operate by Factfulness, to use the term coined by the late Hans Rosling, the Swedish physician and well-known TED speaker. To run like the rest of the world, over-focused on the amplified volatility of the here and now that fills our news feeds, would be to render them dysfunctional. They couldn’t operate. They would be in a constant state of anxiety.

Increasingly, the engines that drive our world – such as science and financial markets – have to decouple themselves from the froth and frenzy of the immediate. They do so because the rest of the world is following a very different path – one where hyper-emotionality and polarized news outlets whip us back and forth like a rag doll caught by a Doberman.

This decoupling has accelerated thanks to the role of technology in compressing the timelines of our worldviews. We are instantly alerted to what’s happening now and are then ushered into a highly biased bubble from which we look at the world. Our worldview is not only formed by emotion, it is deliberately designed to manipulate those emotions.

Emotions are our instant response to the world. They run fully hot or cold, with nary a nuance of reason to modulate them. Also, because emotions are our natural early warning systems, we tend to be hyper-aware of them and are immediately drawn to anything that promises to push our emotional buttons. As such, they are a notoriously inaccurate lens through which to look at reality. That is why efforts are made to minimize their impact in the worlds of science and finance.

We should hold other critical systems to the same standards. Take government, for instance. Now, more than ever, we need those who govern us to be clear-eyed and dealing in facts. Unfortunately, as we saw last week, they’re running as fast as they can in the opposite direction.

Happy New Year?

“Speaking of the happy new year, I wonder if any year ever had less chance of being happy. It’s as though the whole race were indulging in a kind of species introversion — as though we looked inward on our neuroses. And the thing we see isn’t very pretty… So we go into this happy new year, knowing that our species has learned nothing, can, as a race, learn nothing — that the experience of ten thousand years has made no impression on the instincts of the million years that preceded.”

That sentiment, relevant as it is to today, was not written about 2021. It was actually written 80 years ago — in 1941 — by none other than John Steinbeck.

John was feeling a little down. I’m sure we can all relate.

It’s pretty easy to say that we have hopefully put the worst year ever behind us. I don’t know about your news feed, but mine has been like a never-ending bus tour of Dante’s nine circles of Hell — and I’m sitting next to the life insurance salesman from Des Moines who decided to have a Caesar salad for lunch.

An online essay by Umair Haque kind of summed up 2020 for me: “The Year of the Idiot.” In it, Haque doesn’t pull any punches:

“It was the year that a pandemic searched the ocean of human stupidity, and found, to its gleeful delight, that it appeared to be bottomless. 2020 was the year that idiots wrecked our societies.”

In case you’re not catching the drift yet, Haque goes on to say, “The average person is a massive, gigantic, malicious, selfish idiot.”

Yeah. That pretty much covers it.

Or does it? Were our societies wrecked? Is the average person truly that shitty? Is the world a vast, soul sucking, rotten-cabbage-reeking dumpster fire? Or is it just the lens we’re looking through?

If you search hard enough, you can find those who are looking through a different lens — one that happens to be backed by statistical evidence rather than what bubbles to the top of our newsfeed. One of those people is Ola Rosling. He’s carrying on the mission of his late father, Hans Rosling, who was working on the book “Factfulness” when he passed away in 2017. Bill Gates called it “one of the most educational books I’ve ever read.” And Bill reads a lot of books!

Believe it or not, if you remove a global pandemic from the equation (which, admittedly, is a whole new scale of awful) the world may actually be in better shape than it was 12 months ago. And even if you throw the pandemic into the mix, there are some glimmers of silver peeking through the clouds.

Here are some things you may have missed in your news feed:

Wild polio was eradicated from Africa. That’s big news. It’s a massive achievement that had its to-do box ticked last August. And I’m betting you never heard about it.

Also, the medical and scientific world has never before mobilized and worked together on a project like the new COVID mRNA vaccines now rolling out. Again, this is a huge step forward that will have far-reaching impacts on healthcare in the future. But that’s not what the news is talking about.

Here’s another thing. At long last, it looks like the world may finally be ready to start tearing apart the layers that hide systemic racism. What we’re learning is that it may not be the idiots — and, granted, there are many, many idiots — who are the biggest problem. It may be people like me, who have unknowingly perpetuated the system and are finally beginning to see the endemic bias baked into our culture.

These are just three big steps forward that happened in 2020. There are others. We just aren’t talking about them.

We always look on the dark side. We’re a “glass half-empty” species. That’s what Rosling’s book is about: our tendency to skip over the facts to rush to the worst possible view of things. We need no help in that regard — but we get it anyway from the news business, which, run by humans and aimed at humans, amplifies our proclivity for pessimism.

I’m as glad as anyone to see 2020 in my rear-view mirror. But I am carrying something of that year forward with me: a resolution to spend more time looking for facts and relying less on media “spun” for profit to understand the state of the world.

As we consume media, we have to remember that good news is just not as profitable as bad news. We need to broaden our view to find the facts. Hans Rosling warned us, “Forming your worldview by relying on the media would be like forming your view about me by looking only at a picture of my foot.”

Yes, 2020 was bad, but it was also good. And because there are forces that swing the pendulum both ways, many of the things that were good may not have happened without the bad. In the same letter in which Steinbeck expressed his pessimism about 1941, he went on to say this:

“Not that I have lost any hope. All the goodness and the heroisms will rise up again, then be cut down again and rise up. It isn’t that the evil thing wins — it never will — but that it doesn’t die. I don’t know why we should expect it to. It seems fairly obvious that two sides of a mirror are required before one has a mirror, that two forces are necessary in man before he is man.”

There are two sides to every story, even when it’s a horror story like 2020.

Second Thoughts about the Social Dilemma

Watched “The Social Dilemma” yet? I did, a few months ago. The Netflix documentary sets off all kinds of alarms about social media and how it’s twisting the very fabric of our society. It’s a mix of standard documentary fodder — a lot of tell-all interviews with industry insiders and activists — with an (ill-advised, in my opinion) dramatization of the effects of social media addiction in one particular family.

The one most affected is a male teenager who is suddenly drawn, zombie-like, by his social media feed into an ultra-polarized political activist group. Behind the scenes, operating in a sort of evil-empire control room setting, there are literally puppet masters pulling his strings.

It’s scary as hell. But should we be scared? Or — at least — should we be that scared?

Many of us are sounding alarms about social media and how it nets out to be a bad thing. I’m one of the worst. I am very concerned about the impact of social media, and I’ve said so many, many times in this column. But I also admit that this is a social experiment playing out in real time, so it’s hard to predict what the outcome will be. We should keep our minds open to new evidence.

I’ve also said that younger generations seem to be handling this in stride. At least, they seem to be handling it better than those in my generation. They’re quicker to adapt and to use new technologies natively to function in their environments, rather than fumble as we do, searching for some corollary to the world we grew up in.

I’ve certainly had pushback on this observation. Maybe I’m wrong. Or maybe, like so many seemingly disastrous new technological trends before it, social media may turn out to be neither bad nor good. It may just be different.

That certainly seems to be the case if you read a new study from the Institute for Family Studies at Brigham Young University’s Wheatley Institution.  

One of the lead authors of the study, Jean Twenge, previously rang the alarm bells about how technology was short-circuiting the mental wiring of our youth. In a 2017 article in The Atlantic titled “Have Smartphones Destroyed a Generation?” she made this claim:

“It’s not an exaggeration to describe iGen as being on the brink of the worst mental-health crisis in decades. Much of this deterioration can be traced to their phones.”

The article describes a generation of zombies mentally hardwired to social media through their addiction to their iPhone. One of the more startling claims was this:

“Teens who spend three hours a day or more on electronic devices are 35 percent more likely to have a risk factor for suicide, such as making a suicide plan.”

Again, scary as hell, right? This sounds frighteningly similar to the scenarios laid out in “The Social Dilemma.”

But what if you take this same group and this same author, fast-forward three years to the middle of the worst pandemic in our lifetimes, and check in with over 1,500 teens to see how they’re doing in a time where they have every right to be depressed? Not only are they locked inside, they’re also processing societal upheavals and existential threats like systemic racial inequality, alt-right political populism and climate change. If 2017-2018 was scary for them, 2020 is a dumpster fire.

Surprisingly, those same teens appear to be doing better than they were two years ago. The study had four measures of ill-being: loneliness, life dissatisfaction, unhappiness and depression. The results were counterintuitive, to say the least. The share of teens indicating they were depressed actually dropped substantially, from 27% in 2018 to 17% among teens quarantined during the 2020 school year. Fewer said they were lonely as well.

The study indicated that the reasons for this could be because teens were getting more sleep and were spending more time with family.

But what about smartphones and social media? Wouldn’t a quarantined teen (a Quaran-teen?) be spending even more time on his or her phone and social media? 

Well, yes — and no. The study found screen time didn’t really go up, but the way that time was spent did shift. Surprisingly, time spent on social media went down, while time spent on video chats with friends or watching online streaming entertainment went up.

As I shared in my column a few weeks ago, this again indicates that it’s not how much time we spend on social media that determines our mental state. It’s how we spend that time. If we spend it looking for connection, rather than obsessing over social status, it can be a good thing. 

Another study, from the University of Oxford, examined data on more than 300,000 adolescents and found that increased screen time has no more impact on teenagers’ mental health than eating more potatoes. Or wearing glasses.

If you’re really worried about your teen’s mental health, make sure they have breakfast. Or get enough sleep. Or just spend more time with them. All those things are going to have a lot more impact than the time they spend on their phone.

To be clear, this is not me becoming a fan of Facebook or social media in general. There are still many things to be concerned about. But let’s also realize that technology — any technology — is a tool. It is not inherently good or evil. Those qualities can be found in how we choose to use technology.

Have More People Become More Awful?

Is it just me, or do people seem a little more awful lately? There seems to be a little more ignorance in the world, a little less compassion, a little more bullying and a lot less courtesy.

Maybe it’s just me.

It’s been a while since I’ve checked in with eternal optimist Steven Pinker.  The Harvard psychologist is probably the best-known proponent of the argument that the world is consistently trending towards being a better place.  According to Pinker, we are less bigoted, less homophobic, less misogynist and less violent. At least, that’s what he felt pre-COVID lockdown. As I said, I haven’t checked in with him lately, but I suspect he would say the long-term trends haven’t appreciably changed. Maybe we’re just going through a blip.

Why, then, does the world seem to be going to hell in a hand cart?  Why do people — at least some people — seem so awful?

I think it’s important to remember that our brain likes to play tricks on us. It’s in a never-ending quest to connect cause and effect. Sometimes, to do so, the brain jumps to conclusions. Unfortunately, it is aided in this unfortunate tendency by a couple of accomplices — namely news reporting and social media. Even if the world isn’t getting shittier, it certainly seems to be. 

Let me give you one example. In my local town, an anti-masking rally was recently held at a nearby shopping mall. Local news outlets jumped on it, with pictures and video of non-masked, non-socially distanced protesters carrying signs and chanting about our decline into Communism and how their rights were being violated.

What a bunch of boneheads — right? That was certainly the consensus in my social media circle. How could people care so little about the health and safety of their community? Why are they so awful?

But when you take the time to unpack this a bit, you realize that everyone is probably overplaying their hands. I don’t have exact numbers, but I don’t think there were more than 30 or 40 protesters at the rally. The population of my city is about 150,000. Those protesters represented about 0.03% of the total population.

Let’s say for every person at the rally, there were 10 who felt the same way but weren’t there. That’s still less than 1%. Even if you multiplied the number of protesters by 100, it would still be just under 3% of my community. We’re still talking about a tiny fraction of all the people who live in my city.
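For anyone who wants to check the back-of-the-envelope math, here is a quick sanity check (the roughly 40 protesters and 150,000 population are the approximate figures from the example above):

```python
# Rough figures assumed from the rally example: ~40 protesters, city of ~150,000.
population = 150_000
protesters = 40

# The protesters themselves: about 0.03% of the city.
share = protesters / population * 100
print(f"{share:.2f}%")

# Even at 10 quiet sympathizers per protester, it's still well under 1%.
print(f"{protesters * 10 / population:.1%}")

# And at 100x, it's still just under 3% of the community.
print(f"{protesters * 100 / population:.1%}")
```

Small, very visible numbers get mentally inflated into “everyone” — which is the availability bias the next paragraph describes.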

But both the news media and my social media feed have ensured that these people are highly visible. And because they are, our brain likes to use that small and very visible sample and extrapolate it to the world in general. It’s called availability bias, a cognitive shortcut where the brain uses whatever’s easy to grab to create our understanding of the world.

But availability bias is nothing new. Our brains have always done this. So, what’s different about now?

Here, we have to understand that the current reality may be leading us into another “mind-trap.” A 2018 study from Harvard introduced something called “prevalence-induced concept change,” which gives us a better understanding of how the brain focuses on signals in a field of noise. 

Basically, when signals of bad things become less common, the brain works harder to find them. We expand our definition of what is “bad” to include more examples so we can feel more successful in finding them.

I’m probably stretching beyond the limits of the original study here, but could this same thing be happening now? Are we all super-attuned to any hint of what we see as antisocial behavior so we can jump on it? 

If this is the case, again social media is largely to blame. It’s another example of our current toxic mix of dog-whistle, cancel-culture, virtue-signaling pseudo-reality that is being driven by social media.

Those are two possible things that are happening. But if we add one more, it becomes a perfect storm of perceived awfulness.

In a normal world, we all have different definitions of the ethical signals we’re paying attention to. What you are focused on right now in your balancing of what is right and wrong is probably different from what I’m currently focused on. I may be thinking about gun control while you’re thinking about reducing your carbon footprint.

But now, we’re all thinking about the same thing: surviving a pandemic. And this isn’t just some theoretical mind exercise. This is something that surrounds us, affecting us every single day. When it comes to this topic, our nerves have been rubbed raw and our patience has run out. 

Worst of all, we feel helpless. There seems to be nothing we can do to edge the world toward being a less awful place. Behaviors that in another reality and on another topic would have never crossed our radar now have us enraged. And, when we’re enraged, we do the one thing we can do: We share our rage on social media. Unfortunately, by doing so, we’re not part of the solution. We are just pouring fuel on the fire.

Yes, some people probably are awful. But are they more awful than they were this time last year? I don’t think so. I also can’t believe that the essential moral balance of our society has collectively nosedived in the last several months. 

What I do believe is that we are living in a time where we’re facing new challenges in how we perceive the world. Now, more than ever before, we’re on the lookout for what we believe to be awful. And if we’re looking for it, we’re sure to find it.

You Said, ‘Why Public Broadcasting?’ I Still Say, ‘Why Not?’

It appears my column a few weeks ago on public broadcasting hit a few raw nerves. Despite my trying to stickhandle around the emotionally charged use of the word “socialism,” there were a few comments saying, in essence: why should taxpayers have to support broadcasting when there are private and corporate donors willing to do so? Why would we follow a socialist approach to ensuring fair and responsible journalism? We are the land of the free and open market. Let’s just let it do its job.

One commenter suggested that if people want to support responsible journalism, let them become subscribers. Make it a Netflix-based model for journalism. That is one solution put forward in my friend John Marshall’s  new book, “Free is Bad.”

It’s not wrong. It’s certainly one approach. I would encourage everyone to subscribe to at least one news publication that still practices real journalism.

Another commenter suggested that as long as there are donors who believe in journalism and are willing to put their money where their mouth is, we can let them carry the load. That’s another approach. 

Case in point, ProPublica. 

ProPublica is a nonprofit newsroom funded by donations. The quality of its reporting has garnered it six Pulitzers, five Peabodys, three Emmys and a number of other awards. It can certainly be pointed to as a great example of high-quality reporting that doesn’t rely on advertising dollars. But ProPublica has been around since 2008, and it has only a little over 100 journalists on the payroll. I’m sure its principals would love to hire more. They just don’t have enough money.

The problem here — the one that prompted my suggestion to consider public broadcasting as an alternative — is that both subscriber and donor-based approaches are like trying to kill the elephant in the room with a flyswatter. The economics are hopelessly imbalanced and just can’t work.

Journalism is in full-scale attrition because its revenue model is irretrievably broken. Here’s why it’s broken: The usual winner in competitions based on capitalism is what’s most popular, not what’s the best. It’s a race to the shallow end of the pool.

And that’s what’s happened to real news reporting. Staying shallow in an advertising-supported marketplace is the best way to ensure profitability. 

But even the shallow end needs some water; there needs to be some news to act as the raw material for opinion and analysis content. In the news business, that water is the overflow from the deep end. And someone — somewhere — has to keep refilling the deep end.

In a market that is determined to cling to free-market capitalism, no one is willing to invest in the type of journalism required to keep the deep end full. It’s the Tragedy of the Commons, applied to journalism. There are too many taking, and no one is giving back. Incentives and required outcomes are not only not aligned, they are pointed in opposite directions. 

But, as my commenters noted, that is where subscriptions and donations can come in. Obviously, a subscriber-based model has worked very well for streaming services like Netflix. Why couldn’t the same be true for journalism? 

I don’t believe the same approach will work, for a few reasons. 

First, Netflix has the advantage of exclusivity. You have to subscribe to access its content. Journalism doesn’t work that way. Once a news story has broken, there is a whole downstream gaggle of news channels that will jump on it and endlessly spin and respin it with their own analysis and commentary.

This respun content will always be more popular than the original story, because it’s been predigested to align with the target audience’s own beliefs and perspectives. As I’ve said before, when it comes to news, we have a junk food habit. And why would you buy broccoli when you can get a cheeseburger for free?

This exclusivity also gives Netflix the ability to program both for quality and popularity. For every “Queen’s Gambit,” there are dozens of “Tiger Kings” and other brain-food junk snacks. When all the money is being dumped into the same pool, it can fill both the shallow and deep ends at the same time.

But perhaps the biggest misconception about Netflix’s success is that it hasn’t actually been determined whether Netflix is successful. It is still a model in transition, relying heavily on licensed content to prop up the profitability of its original programming. When it comes to successfully transitioning the majority of viewer streams to its own programming, the jury is still very much out, as this analysis notes.

There are more reasons why I don’t think a subscription model is the best answer to journalism attrition, but we’ll leave it there for now. 

But what about donor-based journalism, like that found on PBS affiliates or ProPublica? While I don’t doubt their intentions or the quality of the reporting, I do have issues with the scale. There are simply not enough donor dollars flowing into these organizations to fund the type of expensive journalism that we need. 

And these donor dollars are largely missing in local markets, where the attrition of true news reporting is progressing at an even faster rate. In the big picture — and to return to our previous analogy — this represents a mere trickle into the deep end. 

There are just some things that shouldn’t exist in a for-profit setting. The dynamics of capitalism and how it aligns incentives just don’t work for these examples. These things are almost always social obligations that we must have but that require a commitment that usually represents personal sacrifice.

This is the basis of a social democracy where personal sacrifice is typically exacted through taxation. While you may not like it, taxation is still the best way we’ve found to prevent the Tragedy of the Commons. 

We are now to the point where access to true and reliable information has become a social obligation. And much as we may not like it, we all need to sacrifice a little bit to make sure we don’t lose it forever.

Friendship: Uncoupled

This probably won’t come as a shock to anyone reading this: A recent study says that it’s not if you use social media that determines your happiness, but how you use social media. 

Derrick Wirtz, an associate professor of teaching in psychology at the University of British Columbia-Okanagan, took a close look at how people use three major social platforms — Facebook, Twitter and Instagram — and whether the way you use them can make you happier or sadder.

As I said, most of you probably said to yourself, “Yeah, that checks out.” But this study does bring up an interesting nuance with some far-reaching implications. 

In today’s world, we’re increasingly using Facebook to maintain our social connections. And, according to Facebook’s mission statement, that’s exactly what’s supposed to happen: “People use Facebook to stay connected with friends and family, to discover what’s going on in the world, and to share and express what matters to them.”

The interesting thing in this study is the divide between our social activities — those aimed at bonding versus those aimed at gaining status — and how that impacts our moods and behaviors. It’s difficult to untangle the effect of those two factors, because they are so intertwined in our psyches. But according to this study, Dr. Wirtz found that some of us are spending far more time on social media “status-checking” than actually tending to our friendships.

“Passive use, scrolling through others’ posts and updates, involves little person-to-person reciprocal interaction while providing ample opportunity for upward comparison,” says Wirtz. 

We can scroll our newsfeed without any actual form of engagement — but that’s not what we were designed to do. Our social skills evolved to develop essential mutually beneficial bonds in a small group setting.

Friendship is meant to be nurtured and tended to organically and intimately in a face-to-face environment.  But the distal nature of social media is changing the dynamics of how we maintain relationships in our network. 

Take how we first establish friendships, for instance. When you meet someone for the very first time, how do you decide whether you’re going to become friendly or not? The answer, not surprisingly, is complex and nuanced. Our brain works overtime to determine whether we should bond or not. But, also not surprisingly, almost none of that work is based on rational thought.

UCLA psychologist Dr. Elizabeth Laugeson teaches young adults with social challenges, such as those on the autism spectrum, how to take those very first steps toward friendship when meeting a stranger. If you can’t pick up the face-to-face nuances of body language and unspoken social cues intuitively, becoming friends can be incredibly difficult. Essentially, we are constantly scanning the other person for small signs of common interest from which we can start working toward building trust. 

Even if you clear this first hurdle, it’s not easy to build an actual friendship. It requires a massive investment of our time and energy. A recent study from the University of Kansas found it takes about 50 hours of socializing just to go from acquaintance to casual friend. 

Want to make a “real” friend? Tack another 40 hours onto that. And if your goal is to become a “close” friend, you’d better be prepared to invest at least 200 hours in total.

So that raises the question: why would we make this investment in the first place? Why do we need friends? And why do we need at least a handful of really close friends? The answer lies in the concept of reciprocity.

From an evolutionary perspective, having friends made it easier to survive and reproduce. We didn’t have to go it alone. We could help each other past the rough spots, even if we weren’t related to each other. Having friends stacked the odds in our favor. 

This is when our investment in all those hours of building friendships paid off. Again, this takes us back to the intimate and organic roots of friendship. 

Our brains, in turn, reinforced this behavior by making sure that having friends made us happy. 

Of course, like most human behaviors, it’s not nearly that simple or benign. Our brains also entwine the benefits of friendship with the specter of social status, making everything much more complicated. 

Status also confers an evolutionary advantage. For many generations, we have trod this fine line between being a true friend and being obsessed with our own status in the groups where we hang out.

And then came social media.

As Wirtz’s study shows, we now have this dangerous uncoupling between these two sides of our nature. With social media, friendship is now many steps removed from its physical, intimate and organic roots. It is stripped of the context in which it evolved. And, it appears, the intertwined strands of friendship and social status are unraveling. When this happens, time on social media can reap the anxiety and jealousy of status-checking without any of the joy that comes from connecting with and helping a friend. 

On a person-to-person basis, this uncoupling can be disturbing and unfortunate. But consider what may happen when these same tendencies are amplified and magnified through a massive, culture-wide network.