Why Hate is Trending Up

There seems to be a lot of hate in the world lately. But hate is a hard thing to quantify. There are, however, a couple of places that may put some hard numbers behind my hunch.

Google’s Ngram Viewer tracks how often a word appears in published books, from 1800 through 2022. According to Ngram, usage of “hate” has skyrocketed, beginning in the mid-1980s. In 2022, the last year you can search, “hate” appeared roughly three times more often than its historical baseline.

Ngram also allows you to search separately for usage in American English and British English. You’ll either be happy or dismayed to learn that hate knows no boundaries. The British hate almost as much as Americans do, with the same steep climb over the past four decades. Americans still have the edge, though, using the word about 40% more often than those speaking the Queen’s English.
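If you want to check that hunch yourself, here’s a minimal sketch of how you might pull the same numbers programmatically. It leans on the undocumented JSON endpoint behind the Ngram viewer, so the URL, parameter names and corpus identifiers below are assumptions based on how the public viewer builds its links; they may change (or break) without warning.

```python
# A minimal sketch, not an official API: books.google.com/ngrams/json is the
# undocumented endpoint the Ngram viewer itself calls. Parameter names and
# corpus identifiers ("en-US-2019", "en-GB-2019", etc.) are assumptions and
# may change without notice.
import requests

def ngram_frequencies(phrase, corpus="en-2019", start=1800, end=2019):
    """Fetch yearly relative frequencies for a phrase from the Ngram viewer."""
    resp = requests.get(
        "https://books.google.com/ngrams/json",
        params={
            "content": phrase,
            "year_start": start,
            "year_end": end,
            "corpus": corpus,
            "smoothing": 3,
        },
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()
    return data[0]["timeseries"] if data else []

if __name__ == "__main__":
    us = ngram_frequencies("hate", corpus="en-US-2019")
    gb = ngram_frequencies("hate", corpus="en-GB-2019")
    if us and gb:
        # Compare how much more often "hate" appears in American vs. British
        # books in the most recent year of the corpus.
        print(f"US/GB frequency ratio, final year: {us[-1] / gb[-1]:.2f}")
```

Swap in whatever corpus identifier the viewer shows in its own URL if you want the data that runs through 2022.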

One difference between the two graphs shows up during the First World War, when usage of “hate” in Britain spiked briefly. The U.S. didn’t have the same spike.

Another way to measure hate is provided by the Southern Poverty Law Center in Montgomery, Alabama, which has been publishing a “hate map” since 2000. The map tracks hate and antigovernment groups. In 2000, the first year of the map, the SPLC tracked 599 hate groups across the U.S. By 2023, that number had grown to 1,430, nearly two and a half times as many.

So – yeah – it looks like we all hate a little more than we used to. I’ve talked before about the Overton Window, the construct that defines what is acceptable to talk about in public. And based on both of these quantitative measures, it looks like “hate” is trending up. A lot.

I’m not immune to trends. I don’t personally track such things, but I’m pretty sure the word “hate” has slipped from my lips more often in the past few years. But here’s the thing. It’s almost never used towards a person I know well. It’s certainly never used towards a person I’m in the same room with. It’s almost always used towards a faceless construct that represents a person or a group of people that I really don’t know very well. It’s not like I sit down and have a coffee with them every week. And there we have one of the common catalysts of hate – something called “dehumanization.”

Dehumanization is a mental backflip where we take a group and strip them of their human qualities, including intelligence, compassion, kindness or social awareness. We in our own “in group” make those in the “out group” less than human so it’s easier to hate them. They are “stupid”, “ignorant”, “evil” or “animals”.

But an interesting thing happens when we’re forced to sit face to face with a representative from this group and actually engage them in conversation so we can learn more about them. Suddenly, we see they’re not as stupid, evil or animalistic as we thought. Sure, we might not agree with them on everything, but we don’t hate them. And the reason for this is another thing that makes us human, a molecule called oxytocin.

Oxytocin has been called the “trust molecule” by neuroeconomist Paul Zak. It kicks off a neurochemical reaction that readies our brains to be empathetic and trusting. It is part of our evolved trust-sensing mechanism, orchestrating a delicate dance between our prefrontal cortex and other regions like the amygdala.

But to get the oxytocin flowing, you really need to be face-to-face with a person. You need to be communicating with your whole body, not just your eyes or ears. The way we actually communicate has been called the 7-38-55 rule, thanks to research done in the 1960s and ’70s by UCLA body language researcher Albert Mehrabian. He showed that 7% of communication is verbal, 38% is tone of voice and 55% is through body language.

It’s that 93% of communication that is critical in the building of trust. And it can only happen face to face. Unfortunately, our society has done a dramatic about-face away from communication that happens in a shared physical space towards communication that is mediated through electronic platforms. And that started to happen about 40 years ago.

Hmmm, I wonder if there’s a connection?

The World vs Big Tech

Around the world, governments have their legislative crosshairs trained on Big Tech. It’s happening in the US, the EU and here in my country, Canada. The majority of these actions are anti-trust suits. But Australia has just introduced a different type of legislation, a social media ban for those under 16. And that could change the game – and the conversation – completely for Big Tech.

There are more anti-trust actions in the queue in the US than at any time in the previous five decades. The fast and loose interpretation of antitrust enforcement in the US is that monopolies are only attacked when they may cause significant harm to customers through lack of competition. The US approach to anti-trust since the 1970s has typically followed the Chicago School of neoclassical economic theory, which places all trust in the efficiency of markets and tells government to keep its damned hands off the economy. Given this, and given the pro-business slant of every US administration, both Republican and Democratic, since Reagan, it’s not surprising that we’ve seen relatively few anti-trust suits in the past 50 years.

But the rapid rise of monolithic Big Tech platforms has raised more discussion about anti-trust in the past decade than in the previous five. These platforms sweep the industries they spawn along in their wake, leaving little room for upstart competitors to survive long enough to gain significant market share.

Case in point: Google. 

The recent Canadian lawsuit has the Competition Bureau (our anti-trust watchdog) suing Google for anti-competitive practices in selling its online advertising services north of the 49th parallel. The Bureau is asking that Google be required to sell off two of its ad-tech tools, pay penalties worth up to 3% of its global gross revenues, and be prohibited from engaging in anti-competitive practices in the future.

According to a 3-year inquiry into Google’s Canadian business practices by the Bureau, Google controls 90% of all ad servers and 70% of advertising networks operating in the country. Mind you, Google started the online advertising industry in the relatively green fields of Canada back when I was still railing about the ignorance of Canadian advertisers when it came to digital marketing. No one else really had a chance. But Google made sure they never got one by wrapping its gigantic arms around the industry in an anti-competitive bear hug.

The recent Australian legislation is of a different category, however. Anti-trust suits are – by nature – not personal. They are all about business. But the Australian ban puts Big Tech in the same category as Big Tobacco, Big Alcohol and Big Pharma – alleging that they are selling an addictive product that causes physical or emotional harm to individuals. And the rest of the world is closely watching what Australia does. Canada is no exception.

The most pertinent question is how Australia will enforce the ban. Restricting social media access for those under 16 is not something to be considered lightly. It’s a huge technical, legal and logistical hurdle to get over. But if Australia can figure it out, it’s certain that other jurisdictions around the world will follow in their footsteps.

This legislation opens the door to more vigorous public discourse about the impact of social media on our society. Politicians don’t introduce legislation unless they feel that – by doing so – they will continue to get elected. And the key to being elected is one of two things: give the electorate what they want, or protect them against what they fear. In Australia, recent polling indicates the ban is supported by 77% of the population. Even those opposing the ban aren’t doing so in defense of social media. They’re worried that the devil might be in the details and that the legislation is being pushed through too quickly.

These types of things tend to follow a similar narrative arc: fads and trends drive widespread adoption – evidence mounts about the negative impacts – industries either ignore or actively sabotage the sources of the evidence – and, with enough critical mass, government finally gets into the act by introducing protective legislation.

With tobacco in the US, that arc took a couple of decades, from the explosion of smoking after World War II to the U.S. Surgeon General’s 1964 report linking smoking and cancer. The first warning labels on cigarette packages appeared two years later, in 1966.

We may be on the cusp of a similar movement with social media. And, once again, it’s taken 20 years. Facebook was founded in 2004.

Time will tell. In the meantime, keep an eye on what’s happening Down Under.

Democracy Dies in the Middle

As I write this, I don’t know what the outcome of the election will be. But I do know this: there has never been a U.S. presidential election campaign quite like this one. If you were scripting a Netflix series, you couldn’t have made up a timeline like this (and this is only a sampling):

January 26 – A jury ordered Donald Trump to pay E. Jean Carroll $83 million in additional emotional, reputation-related, and punitive damages. The original award was $5 million.

April 15 – The trial of New York vs Donald Trump begins. Trump is charged with 34 felony counts.

May 30 – Trump is found guilty on all 34 counts in his New York trial, making him the first U.S. president to be convicted of a felony.

June 27 – Biden and Trump hold their first campaign debate, hosted by CNN. Biden’s performance is so bad that it’s met with calls for him to suspend his campaign.

July 1 – The U.S. Supreme Court delivers a 6–3 decision in Trump v. United States, ruling that Trump had absolute immunity for acts he committed as president within his core constitutional purview. This effectively puts further legal action against Trump on hold until after the election.

July 13 – Trump is shot in the ear in an assassination attempt at a campaign rally held in Butler, Pennsylvania. One bystander and the shooter are killed and two others are injured.

July 21 – Biden announces his withdrawal from the race, necessitating the start of an “emergency transition process” for the Democratic nomination. On the same day, Kamala Harris announces her candidacy for president.

September 6 – Former vice president Dick Cheney and former Congresswoman Liz Cheney announce their endorsements for Harris. That’s the former Republican Vice President and the former chair of the House Republican Conference, endorsing a Democrat.

September 15 – A shooting takes place at the Trump International Golf Club in West Palm Beach, Florida, while Donald Trump is golfing. Trump is unharmed in the incident and is evacuated by Secret Service personnel.

With all of that, it’s rather amazing that – according to a recent Pew Research Center report – Americans don’t seem to be any more interested in the campaign than in previous election years. The number of people closely following election news is running about the same as in 2020 and behind what it was in 2016.

This could be attributed in part to a certain ennui on the part of Democrats. In the spring, their level of interest was running behind Republicans. It was only when Joe Biden dropped out in July that Democrats started tuning in in greater numbers. As of September, they were following just as closely as Republicans.

I also find it interesting to see where they’re turning for their election coverage. For those 50 plus, it is overwhelmingly television. News websites and apps come in a distant second.

But for those under 50, social media is the preferred source, with news websites and television tied in second place. This is particularly true for those under 30, half of whom turn to social media. The 30-to-49 cohort is the most media-diverse, with their sources pretty much evenly split between TV, websites and social media.

If we look at how political affiliation impacts where people turn to be informed, there are no great surprises. Democrats favour the three networks (CBS, NBC and ABC), with CNN just behind. Republicans turn first to Fox News, then the three networks, then conservative talk radio.

The thing to note here is that Republicans tend to stick to news platforms known for having a right-wing perspective, while Democrats are more open to what could arguably be considered more objective sources.

It is interesting to note that this flips a bit with younger Republicans, who are more open to mainstream media like the three networks or papers like the New York Times. Sixty percent of Republicans aged 18 – 29 cited the three networks as a source of election information, and 45% mentioned the New York Times.

But we also have to remember that all younger people, Republican or Democrat, are more apt to rely on social media to learn about the election. And there we have a problem. Recently, George Washington University political scientist Dave Karpf was interviewed on CBC Radio about how Big Tech is influencing this election.  What was interesting about Karpf’s comments is how social media is now just as polarized as our society. X has become a cesspool of right-leaning misinformation, led by Trump supporter Elon Musk, and Facebook has tried to depoliticize their content after coming under repeated fire for influencing previous campaigns.

So, the two platforms that Karpf said provided the most stability in past elections have effectively lost their status as common ground for messaging to the right and the left. Karpf explains, “Part of what we’re seeing with this election cycle is a gap where nothing has really filled into those voids and left campaigns wondering what they can do. They’re trying things out on TikTok, they’re trying things out wherever they can, but we lack that stability. It is, in a sense, the first post social media election.”

This creates a troubling gap. If those under the age of 30 turn first to social media to be informed, what are they finding there? Not much, according to Karpf. And what they are finding is terribly biased, to the point of lacking any real objectivity.

In 2017, the Washington Post added this line under its masthead: “Democracy Dies in Darkness.” But in this polarized mediascape, I think it’s more accurate to say “Democracy Dies in the Middle.” There’s a Right-Wing reality and a Left-Wing reality. The truth is somewhere in the middle. But it’s getting pretty hard to find it.

Not Everything is Political. Hurricanes, for Example.

During the two recent “once in a lifetime” hurricanes that happened to strike the southern US within two weeks of each other, people apparently thought they were a political plot and that meteorologists were in on the conspiracy.

Michigan meteorologist Katie Nickolaou received death threats through social media.

“I have had a bunch of people saying I created and steered the hurricane, there are people assuming we control the weather. I have had to point out that a hurricane has the energy of 10,000 nuclear bombs and we can’t hope to control that. But it’s taken a turn to more violent rhetoric, especially with people saying those who created Milton should be killed.”

Many weather scientists were simply stunned at the level of stupidity and misinformation hurled their way. After someone suggested that someone should “stop the breathing” of those that “made” the hurricanes, Nickolaou responded with this post, “Murdering meteorologists won’t stop hurricanes. I can’t believe I just had to type that.”

Washington, D.C. based meteorologist Matthew Cappucci also received threats: “Seemingly overnight, ideas that once would have been ridiculed as very fringe, outlandish viewpoints are suddenly becoming mainstream, and it’s making my job much more difficult.” 

Marjorie Taylor Greene, U.S. Representative for Georgia’s 14th congressional district, jumped forcefully into the fray by suggesting the hurricanes themselves were politically motivated. She posted on X: “This is a map of hurricane affected areas with an overlay of electoral map by political party shows how hurricane devastation could affect the election.”

And just in case you’re giving her the benefit of the doubt by saying she might just be pointing out a correlation, not a cause, she doubled down with this post on X: “Yes they can control the weather, it’s ridiculous for anyone to lie and say it can’t be done.” 

You may say that when it comes to MTG, we must consider the source and sigh, “You can’t cure stupid.” But Marjorie Taylor Greene easily won a democratic election with almost 66% of the vote, which means the majority of people in her district believed in her enough to elect her as their representative. Her opponent, Marcus Flowers, is a 10-year veteran of the US Army who served 20 years as a contractor or official for the State Department and Department of Defense. He’s no slouch. But in Georgia’s 14th Congressional district, two out of three voters decided a better choice would be the woman who believes that the Nazi Secret Police were called the Gazpacho.

I’ve talked about this before. Ad nauseam, actually. But this reaches a new level of stupidity…and stupidity on this scale is f*&king frightening. It is the biggest threat we as humans face.

That’s right, I said the “biggest” threat.  Bigger than climate change. Bigger than AI. Bigger than the new and very scary alliance emerging between Russia, Iran, North Korea and China. Bigger than the fact that Vladimir Putin, Donald Trump and Elon Musk seem to be planning a BFF pajama party in the very near future.

All of those things can be tackled if we choose to. But if we are functionally immobilized by choosing to be represented by stupidity, we are willfully ignoring our way to a point where these existential problems – and many others we’re not aware of yet – can no longer be dealt with.

Brian Cox, a professor of particle physics at the University of Manchester and host of science TV shows including Universe and The Planets, is also warning us about rampant stupidity. “We may laugh at people who think the Earth is flat or whatever, the darker side is that, if we become unmoored from fact, we have a very serious problem when we attempt to solve big challenges, such as AI regulation, climate or avoiding global war. These are things that require contact with reality.” 

At issue here is that people are choosing politics over science. And there is nothing that tethers politics to reality. Politics is built on beliefs. Science strives to be built on provable facts. If we choose politics over science, we are embracing wilful ignorance. And that will kill us.

Hurricanes offer us the best possible example of why that is so. Let’s say you, along with Marjorie Taylor Greene, believe that hurricanes are created by meteorologists and mad weather scientists. So, when those nasty meteorologists try to warn you that the storm of the century is headed directly towards you, you respond in one of two ways: you don’t believe them, and/or you get mad and condemn them as part of a conspiracy on social media. Neither of those things will save you. Only accepting science as a reliable prediction of the impending reality will give you the best chance of survival, because it allows you to take action.

Maybe we can’t cure stupid. But we’d better try, because it’s going to be the death of us.

Why Time Seems to Fly Faster Every Year

Last week, I got an email congratulating me on being on LinkedIn for 20 years.

My first thought was that it couldn’t have been twenty years. But when I did the mental math, I realized it was right. I first signed up in 2004. LinkedIn had started just two years before, in 2002.

LinkedIn would have been my first try at a social platform. I couldn’t see the point of MySpace, which started in 2003. And I was still a couple years away from even being aware Facebook existed. It started in 2004, but it was still known as TheFacebook. It wouldn’t become open to the public until 2006, two years later, after it dropped the “The”. So, 20 years pretty much marks the sum span of my involvement with social media.

Twenty years is a significant chunk of time. Depending on your genetics, it’s probably between a quarter and a fifth of your life. A lot can happen in 20 years. But we don’t process time the same way as we get older. Twenty years when you’re 18 seems like a much bigger chunk of time than it does when you’re in your 60s.

I always mark these things in my far-off distant youth by my grad year, which was in 1979. If I use that as the starting point, rolling back 20 years would take me all the way to 1959, a year that seemed pre-historic to me when I was a teenager. That was a time of sock hops, funny cars with tail fins, and Frankie Avalon. These things all belonged to a different world than the one I knew in 1979. Ancient Rome couldn’t have been further removed from my reality.

Yet, that same span of time lies between me and the first time I set up my profile on LinkedIn. And that just seems like yesterday to me. This all got me wondering – do we process time differently as we age? The answer, it turns out, is yes. Time is time – but the perception of time is all in our heads.

The reason why we feel time “flies” as we get older was explained in a paper published by Professor Adrian Bejan. In it, he states, “The ‘mind time’ is a sequence of images, i.e. reflections of nature that are fed by stimuli from sensory organs. The rate at which changes in mental images are perceived decreases with age, because of several physical features that change with age: saccades frequency, body size, pathways degradation, etc.”

So, it’s not that time is moving faster, it’s just that our brain is processing it slower. If our perception of time is made up of mental snapshots of what is happening around us, we simply become slower at taking the snapshots as we get older. We notice less of what’s happening around us. I suspect it’s a combination of slower brains and perhaps not wanting to embrace a changing world quite as readily as we did when we were young. Maybe we don’t notice change because we don’t want things to change.
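Bejan’s argument in that paper is qualitative, but a toy model makes the intuition concrete. The sketch below is my own illustration, not Bejan’s math: it simply assumes the rate at which we register “mental snapshots” declines a few percent with each year of age, and that a year feels only as long as the snapshots we captured during it.

```python
# A toy illustration (my assumption, not Bejan's model): perceived length of a
# year is proportional to the number of mental snapshots registered in it, and
# the snapshot rate declines a fixed percentage with each year of age.
BASE_AGE = 10      # reference age for comparison
DECLINE = 0.03     # assumed 3% drop in processing rate per year of age

def snapshot_rate(age: float) -> float:
    """Assumed rate of mental snapshots at a given age (arbitrary units)."""
    return (1 - DECLINE) ** (age - BASE_AGE)

def perceived_year(age: float) -> float:
    """How long one calendar year feels, relative to a year at the reference age."""
    return snapshot_rate(age) / snapshot_rate(BASE_AGE)

for age in (10, 20, 40, 60, 80):
    print(f"Age {age}: a year feels about {perceived_year(age):.0%} as long as at age {BASE_AGE}")
```

With those made-up numbers, a year at 60 feels roughly a fifth as long as a year at 10, which is the “time flies” effect in miniature.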

If we were using a more objective yardstick (speaking of which, when is the last time you actually used a yardstick?), I’m guessing the world changed at least as much between 2004 and 2024 as it did between 1959 and 1979. If I were 18 years old today, I’m guessing that Britney Spears, The Lord of the Rings and the last episode of Frasier would seem as ancient to me as a young Elvis, Ben-Hur and The Danny Thomas Show seemed to me then.

To me, all these things seem like they were just yesterday. Which is probably why it comes as a bit of a shock to see a picture of Britney Spears today. She doesn’t look like the 22-year-old we remember from what we mistakenly recall as being just a few years ago. But Britney is 42 now, and as a 42-year-old, she’s held up pretty well.

And, now that I think of it, so has LinkedIn. I still have my profile, and I still use it.

Why The World No Longer Makes Sense

Does it seem that the world no longer makes sense? That may not just be you. The world may in fact no longer be making sense.

In the late 1960s, psychologist Karl Weick introduced the world to the concept of sensemaking, but we were making sense of things long before that. It’s the mental process we go through to try to reconcile who we believe we are to the world in which we find ourselves.  It’s how we give meaning to our life.

Weick identified 7 properties critical to the process of sensemaking. I won’t mention them all, but here are three that are critical to keep in mind:

  1. Who we believe we are forms the foundation we use to make sense of the world
  2. Sensemaking needs retrospection. We need time to mull over new information we receive and form it into a narrative that makes sense to us.
  3. Sensemaking is a social activity. We look for narratives that seem plausible, and when we find them, we share them with others.

I think you see where I’m going with this. Simply put, our ability to make sense of the world is in jeopardy, both for internal and external reasons.

External to us, the quality of the narratives that are available to us to help us make sense of the world has nosedived in the past two decades. Prior to social media and the implosion of journalism, there was a baseline of objectivity in the narratives we were exposed to. One would hope that there was a kernel of truth buried somewhere in what we heard, read or saw on major news providers.

But that’s not the case today. Sensationalism has taken over journalism, driven by the need for profitability by showing ads to an increasingly polarized audience. In the process, it’s dragged the narratives we need to make sense of the world to the extremes that lie on either end of common sense.

This wouldn’t be quite as catastrophic for sensemaking if we were more skeptical. The sensemaking cycle does allow us to judge the quality of new information for ourselves, deciding whether it fits with our frame of what we believe the world to be, or if we need to update that frame. But all that validation requires time and cognitive effort. And that’s the second place where sensemaking is in jeopardy: we don’t have the time or energy to be skeptical anymore. The world moves too quickly to be mulled over.

In essence, our sensemaking is us creating a model of the world that we can use without requiring us to think too much. It’s our own proxy for reality. And, as a model, it is subject to all the limitations that come with modeling. As the British statistician George E.P. Box said, “All models are wrong, but some are useful.”

What Box didn’t say is that the more wrong our model is, the less likely it is to be useful. And that’s the looming issue with sensemaking. The model we use to determine what is real is becoming less and less tethered to actual reality.

It was exactly that problem that prompted Daniel Schmachtenberger and others to set up the Consilience Project. The idea of the Project is this – the more diversity in perspectives you can include in your model, the more likely the model is to be accurate. That’s what “consilience” means: pulling perspectives from different disciplines together to get a more accurate picture of complex issues.  It literally means the “jumping together” of knowledge.

The Consilience Project is trying to reverse the erosion of modern sensemaking – both from an internal and external perspective – that comes from the overt polarization and the narrowing of perspective that currently typifies the information sources we use in our own sensemaking models.  As Schmachtenberger says,  “If there are whole chunks of populations that you only have pejorative strawman versions of, where you can’t explain why they think what they think without making them dumb or bad, you should be dubious of your own modeling.”

That, in a nutshell, explains the current media landscape. No wonder nothing makes sense anymore.

My Mind is Meandering

Thirty-seven years ago, when I first drove into the valley I now call home, I said to myself, “Now, this is a place for meandering!”

Meandering is a word we don’t use enough today. We certainly don’t do the actual act of meandering enough anymore. To “meander” is to “flow in a winding course.” It comes from Maiandros, the Greek name of a river in Turkey (also known as the Büyük Menderes) known for its sinuous path. This is perhaps what brought the word to mind when I drove into Western Canada’s Okanagan Valley. This is a valley formed by water, either in flowing or frozen form.

I have always loved the word meander. Even the sound of it is like a journey; you scale the heights of the hard “e,” pausing for a minute to rest against the soft “a”, after which you descend into the lush vale that is formed by its remaining syllable. The aquatic origins of the word are appropriate, because to meander is to be in a state of flow but with no purpose in mind. Meandering allows the mind to freewheel, to pick its own path.

You know what’s another great word? Saunter.

My favorite story about sauntering is that told by Albert Palmer in his 1919 book, The Mountain Trail and Its Message. He tells of an exchange with John Muir, the founder of the Sierra Club, who was called the Father of America’s National Parks. In the exchange, Muir explains why he finds the word “saunter” far more to his taste than “hike”:

“Do you know the origin of that word ‘saunter’? It’s a beautiful word. Away back in the Middle Ages people used to go on pilgrimages to the Holy Land, and when people in the villages through which they passed asked where they were going, they would reply, ‘A la sainte terre,’ ‘To the Holy Land.’ And so they became known as sainte-terre-ers or saunterers. Now these mountains are our Holy Land, and we ought to saunter through them reverently, not ‘hike’ through them.”

According to Google’s Ngram viewer, literary usage of the word “saunter” hit its peak in the 1800s and was in decline for most of the following century. That timeline makes sense. Sauntering would definitely have been popular with the Romantic movement of the late 1800s, a movement back toward appreciating the charms of nature, and an open invitation to “saunter” in Muir’s “Holy Land.”

For some reason, the word seems to be enjoying a bit of a resurgence in usage in the last 20 years.

Meander is a different story. It only started to really appear in books towards the end of the 1800s and continued to be used through the 20th century, although usage dropped during times of tribulation, notably World War I, the Great Depression of the 1930s and throughout World War II. Again, that’s not surprising. It’s hard to meander when you’re in a constant state of anxiety.

As my mind meandered down this path, I wondered if there is a digital equivalent to meandering or sauntering. Take scrolling through Facebook, for example. It is navigating without any specific destination in mind, so perhaps it qualifies as meandering. There is no direct line to connect A to B.

But I wouldn’t call social media scrolling sauntering. There’s a distinction between “meandering” and “sauntering.” I think saunter implies that you know where you’re going, but there is no rigid schedule set to get there. You can take as much time as you like to smell the flowers on your way.

Also, as John Muir mentioned, sauntering requires a certain sense of place. The setting in which you saunter is of critical importance. However you would define your own “Holy Land,” that’s the place where you should saunter. It should be grounded in some gravitas.

That’s why I don’t think you can really saunter through social media. To me, Facebook, Instagram or TikTok are a far cry from being considered hallowed ground.

Can Media Move the Overton Window?

I fear that somewhere along the line, mainstream media has forgotten its obligation to society.

It was 63 years ago (on May 9, 1961) that new Federal Communications Commission Chair Newton Minow gave his famous speech, “Television and the Public Interest,” to the convention of the National Association of Broadcasters.

In that speech, he issued a challenge: “I invite each of you to sit down in front of your own television set when your station goes on the air and stay there, for a day, without a book, without a magazine, without a newspaper, without a profit and loss sheet or a rating book to distract you. Keep your eyes glued to that set until the station signs off. I can assure you that what you will observe is a vast wasteland.”

Minow was saying that media has an obligation to set the cultural and informational boundaries for society. The higher you set them, the more we will strive to reach them. That point was a callback to the Fairness Doctrine, established by the FCC in 1949. The policy required “holders of broadcast licenses to present controversial issues of public importance and to do so in a manner that fairly reflected differing viewpoints.” The Fairness Doctrine was abolished by the FCC in 1987.

What Minow realized, presciently, was that mainstream media is critically important in building the frame for what would come to be called, three decades later, the Overton Window. First identified by policy analyst Joseph Overton at the Mackinac Center for Public Policy, the term would posthumously be named after Overton by his colleague Joseph Lehman.

The term is typically used to describe the range of topics suitable for public discourse in the political arena. But, as Lehman explained in an interview, the boundaries are not set by politicians: “The most common misconception is that lawmakers themselves are in the business of shifting the Overton Window. That is absolutely false. Lawmakers are actually in the business of detecting where the window is, and then moving to be in accordance with it.”

I think the concept of the Overton Window is more broadly applicable than just within politics. In almost any aspect of our society where there are ideas shaped and defined by public discourse, there is a frame that sets the boundaries for what the majority of society understands to be acceptable — and this frame is in constant motion.

Again, according to Lehman,  “It just explains how ideas come in and out of fashion, the same way that gravity explains why something falls to the earth. I can use gravity to drop an anvil on your head, but that would be wrong. I could also use gravity to throw you a life preserver; that would be good.”

Typically, the frame drifts over time to the right or left of the ideological spectrum. What came as a bit of a shock in November of 2016 was just how quickly the frame pivoted and started heading to the hard right. What was unimaginable just a few years earlier suddenly seemed open to being discussed in the public forum.

Social media was held to blame. In a New York Times op-ed written just after Trump was elected president (a result that stunned mainstream media) columnist Farhad Manjoo said,  “The election of Donald J. Trump is perhaps the starkest illustration yet that across the planet, social networks are helping to fundamentally rewire human society.”

In other words, social media can now shift the Overton Window — suddenly, and in unexpected directions. This is demonstrably true, and the nuances of this realization go far beyond the limits of this one post to discuss.

But we can’t be too quick to lay all the blame for the erratic movements of the Overton Window on social media’s doorstep.

I think social media, if anything, has expanded the window in both directions — right and left. It has redefined the concept of public discourse, moving both ends out from the middle. But it’s still the middle that determines the overall position of the window. And that middle is determined, in large part, by mainstream media.

It’s a mistake to suppose that social media has completely supplanted mainstream media. I think all of us understand that the two work together. We use what is discussed in mainstream media to get our bearings for what we discuss on social media. We may move right or left, but most of us realize there is still a boundary to what is acceptable to say.

The red flags start to go up when this goes into reverse and mainstream media starts using social media to get its bearings. If you have the mainstream chasing outliers on the right or left, you start getting some dangerous feedback loops where the Overton Window has difficulty defining its middle, risking being torn in two, with one window for the right and one for the left, each moving further and further apart.

Those who work in the media have a responsibility to society. It can’t be abdicated for the pursuit of profit or by saying they’re just following their audience. Media determines the boundaries of public discourse. It sets the tone.

Newton Minow was warning us about this six decades ago.

Uncommon Sense

Let’s talk about common sense.

“Common sense” is one of those underpinnings of democracy that we take for granted. Basically, it hinges on this concept: the majority of people will agree that certain things are true. Those things are then defined as “common sense.” And common sense becomes our reference point for what is right and what is wrong.

But what if the very concept of common sense isn’t true? That was what researchers Duncan Watts and Mark Whiting set out to explore.

Duncan Watts is one of my favourite academics. He is a computational social scientist at the University of Pennsylvania. I’m fascinated by network effects in our society, especially as they’re now impacted by social media. And that pretty much describes Watts’ academic research “wheelhouse.”

According to his profile he’s “interested in social and organizational networks, collective dynamics of human systems, web-based experiments, and analysis of large-scale digital data, including production, consumption, and absorption of news.”

Duncan, you had me at “collective dynamics.”

I’ve cited his work in several columns before, notably his deconstruction of marketing’s ongoing love affair with so-called influencers: a previous study from Watts shot several holes in the idea of marketing to an elite group of “influencers.”

Whiting and Watts took 50 claims that would seem to fall into the category of common sense. They ranged from the obvious (“a triangle has three sides”) to the more abstract (“all human beings are created equal”). They then recruited an online panel of participants to rate whether the claims were common sense or not. Claims based on science were more likely to be categorized as common sense. Claims about history or philosophy were less likely to be identified as common sense.

What did they find? Well, apparently common sense isn’t very common. Their report says, “we find that collective common sense is rare: at most a small fraction of people agree on more than a small fraction of claims.” Less than half of the 50 claims were identified as common sense by at least 75% of respondents.
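To make that finding concrete, here’s a rough sketch of the kind of tabulation that sits behind a claim like that. It is not Whiting and Watts’ actual dataset or their consensus metrics; it just takes a hypothetical matrix of yes/no ratings (people by claims) and asks how many claims clear a 75% agreement bar, and how often any two raters agree.

```python
# Illustrative only: random data standing in for real survey ratings, and
# simplified agreement measures, not Whiting and Watts' methodology.
import numpy as np

rng = np.random.default_rng(0)
n_people, n_claims = 200, 50

# Hypothetical ratings: rows are respondents, columns are claims,
# 1 = "this is common sense", 0 = "it isn't".
claim_popularity = rng.uniform(0.2, 0.9, size=n_claims)
ratings = (rng.random((n_people, n_claims)) < claim_popularity).astype(int)

# Share of claims that at least 75% of respondents call common sense.
endorsement = ratings.mean(axis=0)
print(f"Claims clearing a 75% agreement bar: {(endorsement >= 0.75).mean():.0%}")

# Average pairwise agreement: how often two respondents give the same answer.
same = ratings @ ratings.T + (1 - ratings) @ (1 - ratings).T
off_diagonal = same.sum() - n_people * n_claims        # drop self-comparisons
pairwise = off_diagonal / (n_people * (n_people - 1) * n_claims)
print(f"Average pairwise agreement: {pairwise:.0%}")
```

The point of the quoted finding is that, with real people and real claims, both of those numbers come out lower than we’d intuitively expect.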

Now, I must admit, I’m not really surprised by this. We know we are part of a pretty polarized society. It’s no shock that we share little in the way of ideological common ground.

But there is a fascinating potential reason why common sense is actually quite uncommon: we define common sense based on our own realities, and what is real for me may not be real for you. We determine our own realities by what we perceive to be real, and increasingly, we perceive the “real” world through a lens shaped by technology and media – both traditional and social.

Here is where common sense gets confusing. Many things – especially abstract things – have subjective reality. They are not really provable by science. Take the idea that all human beings are created equal. We may believe that, but how do we prove it? What does “equal” mean?

So when someone appeals to our common sense (usually a politician), just what are they appealing to? It’s not a universally understood fact that everyone agrees on. It’s typically a framework of belief that is probably only agreed on by a relatively small percentage of the population. This really makes it a type of marketing, completely reliant on messaging and targeting the right market.

Common sense isn’t what it once was. Or perhaps it never was. Either common or sensible.

Feature image: clemsonunivlibrary

We SHOULD Know Better — But We Don’t

“The human mind is both brilliant and pathetic.  Humans have built hugely complex societies and technologies, but most of us don’t even know how a toilet works.”

– from The Knowledge Illusion: Why We Never Think Alone by Steven Sloman and Philip Fernbach.

Most of us think we know more than we do — especially about things we really know nothing about. This phenomenon is called the Dunning-Kruger Effect. Named after psychologists Justin Kruger and David Dunning, this bias causes us to overestimate our ability to do things that we’re not very good at.

That’s the basis of the new book “The Knowledge Illusion: Why We Never Think Alone.” The basic premise is this: We all think we know more than we actually do. Individually, we are all “error prone, sometimes irrational and often ignorant.” But put a bunch of us together and we can do great things. We were built to operate in groups. We are, by nature, herding animals.

This basic human nature was in the back of my mind when I was listening to an interview with Es Devlin on CBC Radio. Devlin describes herself as an artist and stage designer. She was the vision behind Beyoncé’s Renaissance Tour, U2’s current run at The Sphere in Las Vegas, and the 2022 Super Bowl halftime show with Dr. Dre, Snoop Dogg, Eminem and Mary J. Blige.

When it comes to designing a visually spectacular experience, Devlin has every right to be a little cocky. But even she admits that not every good idea comes directly from her. She said the following in the interview (it’s profound, so I’m quoting it at length):

“I learned quite quickly in my practice to not block other people’s ideas — to learn that, actually,  other people’s ideas are more interesting than my own, and that I will expand by absorbing someone else’s idea.

“The real test is when someone proposes something in a collaboration that you absolutely, [in] every atom of your body, revile against. They say, ‘Why don’t we do it in bubblegum pink?’ and it was the opposite of what you had in mind. It was the absolute opposite of anything you would dream of doing.

“But instead of saying, ‘Oh, we’re not doing that,’  you say ‘OK,’ and you try to imagine it. And then normally what will happen is that you can go through the veil of the pink bubblegum suggestion, and you will come out with a new thing that you would never have thought of on your own.

“Why? Because your own little batch of poems, your own little backpack of experience, does not converge with that other person, so you are properly meeting not just another human being, but everything that led up to them being in that room with you.”

From an interview with Tom Power on Q – CBC Radio, March 18, 2024

We live in a culture that puts the individual on a pedestal. When it comes to individualistic societies, none are more so than the United States (according to a study by Hofstede Insights). Protection of personal rights and freedoms is the cornerstone of our society (I am Canadian, but we’re not far behind on this world ranking of individualistic societies). The same is true in the U.K. (where Devlin is from), Australia, the Netherlands and New Zealand.

There are good things that come with this, but unfortunately it also sets us up as the perfect targets for the Dunning-Kruger effect. This individualism and the cognitive bias that comes with it are reinforced by social media. We all feel we have the right to be heard — and now we have the platforms that enable it.

With each post, our unshakable belief in our own genius and infallibility is bulwarked by a chorus of likes from a sycophantic choir jamming their fingers down on the like button. Where we should be skeptical of our own intelligence and knowledge, especially about things we know nothing about, we are instead lulled into hiding behind dangerous ignorance.

What Devlin has to say is important. We need to be mindful of our own limitations and be willing to ride on the shoulders of others so we can see, know and do more. We need to peek into the backpack of others to see what they might have gathered on their own journey.

(Feature Image – Creative Commons – https://www.flickr.com/photos/tedconference/46725246075/)