Being in the Room Where It Happens

I spent the past weekend attending a conference that I had helped to plan. As is now often the case, this was a hybrid conference; you could choose to attend in person or online via Zoom. Although it involved a long plane ride, I chose to attend in person. It could be because – as a planner – I wanted to see how the event played out. Also, it’s been a long time since I attended a conference away from my home. Or – maybe – it was just FOMO.

Whatever the reason, I’m glad I was there, in the room.

This was a very small conference planned on a shoestring budget. We didn’t have money for extensive IT support or AV equipment. We were dependent solely on a laptop and whatever sound equipment our host was able to supply. We knew going into the conference that this would make for a less-than-ideal experience for those attending virtually. But – even accounting for that – I found there was a huge gap in the quality of the experience between those who were there and those who attended online. And, over the duration of the 3-day conference, I observed why that might be so.

This conference was a 50/50 mix of those who already knew each other and those who were meeting for the first time. Even those who were familiar with each other tended to connect more often via a virtual meeting platform than in a physical meeting space. I know that despite the convenience and efficiency of being able to meet online, something is lost in the process. After those days of carefully observing what was happening in the room we were all in, I have a better understanding of what that loss might be – it is the vague and inexact art of creating a real bond with another person.

In that room, the bonding didn’t happen at the speaking podium and very seldom happened during the sessions we so carefully planned. It seeped in on the sidelines, over warmed-over coffee from conference centre urns, overripe bananas and the detritus of the picked-over pastry tray. The bonding came from all of us sharing and digesting a common experience. You could feel a palpable energy in the room. You could pick up the emotion, read the body language and tune in to the full bandwidth of communication that goes far beyond what could be transmitted between an onboard microphone and a webcam.

But it wasn’t just the sharing of the experience that created the bonds. It was the digesting of those experiences after the fact. We humans are herding animals, and that extends to how we come to consensus about things we go through together. We do so through communication with others – not just with words and gestures, but also through the full bandwidth of our evolved mechanisms for coming to a collective understanding. It wasn’t just that a camera and microphone couldn’t transmit that effectively, it was that it happened where there was no camera or mic.

As researchers have discovered, there is a lived reality and a remembered reality and often, they don’t look very much alike. The difference between the effectiveness of an in-person experience and one accessed through an online platform shouldn’t come as a surprise to us. This is due to how our evolved sense-making mechanisms operate. We make sense of reality both internally, through a comparison with our existing cognitive models and externally, through interacting with others around us who have shared that same reality. This communal give-and-take colors what we take with us, in the form of both memories and an updated model of what we know and believe. When it comes to how humans are built, collective sense making is a feature, not a bug.

I came away from that conference with much more than the content that was shared at the speaker dais. I also came away with a handful of new relationships, built on sharing an experience and, through that, laying down the first foundations of trust and familiarity. I would not hesitate to reach out to any of these new friends if I had a question about something or a project I felt they could collaborate on.

I think that’s true largely because I was in the room where it happened.

Face Time in the Real World is Important

For all the advances made in neuroscience, we still don’t fully understand how our brains respond to other people. What we do know is that it’s complex.

Join the Chorus

Recent studies, including this one from the University of Rochester, show that when we see someone we recognize, the brain responds with a chorus of neuronal activity. Neurons from different parts of the brain fire in unison, creating a congruent response that may simultaneously pull from memory, from emotion, from the rational regions of our prefrontal cortex and from other deep-seated areas of our brain. The firing of any one neuron may be relatively subtle, but together this chorus of neurons can create a powerful response to a person. This cognitive choir represents our total comprehension of an individual.

Non-Verbal Communication

“You’ll have your looks, your pretty face. And don’t underestimate the importance of body language!” – Ursula, The Little Mermaid

Given that we respond to people with different parts of the brain, it makes sense that we use parts of the brain we don’t even realize we’re using when communicating with someone else. In 1967, psychologist Albert Mehrabian attempted to pin this down with some actual numbers, publishing research in which he put forth what became known as Mehrabian’s Rule: 7% of communication is verbal, 38% is tone of voice and 55% is body language.

Like many oft-quoted rules, this one is typically misquoted. It’s not that words are unimportant when we communicate something. Words convey the message. But it’s the non-verbal part that determines how we interpret the message – and whether we trust it or not.

Folk wisdom has told us, “Your mouth is telling me one thing, but your eyes are telling me another.” In this case, folk wisdom is right. We evolved to respond to another person with our whole bodies, with our brains playing the part of conductor. Maybe the numbers don’t exactly add up to Mehrabian’s neat and tidy ratio, but the importance of non-verbal communication is undeniable. We intuitively pick up incredibly subtle hints: a slight tremor in the voice, a bead of sweat on the forehead, a slight turn down of one corner of the mouth, perhaps a foot tapping or a finger trembling, a split-second darting of the eye. All this is subconsciously monitored, fed to the brain and orchestrated into a judgment about a person and what they’re trying to tell us. This is how we evolved to judge whether we should build trust or lose it.

Face to Face vs Face to Screen

Now, we get to the question you knew was coming, “What happens when we have to make these decisions about someone else through a screen rather than face to face?”

Given that we don’t fully understand how the brain responds to people yet, it’s hard to say how much of our ability to judge whether we should convey trust or withhold it is impaired by screen-to-screen communication. My guess is that the impairment is significant, probably well over 50%. It’s difficult to test this in a laboratory setting, given that it generally requires some type of neuroimaging, such as an fMRI scanner. In order to present a stimulus for the brain to respond to when the subject is strapped in, a screen is really the only option. But common sense tells me – given the sophisticated and orchestrated nature of our brain’s social responses – that a lot is lost in translation from a real-world encounter to a screen recording.

New Faces vs Old Ones

If we think of how our brains respond to faces, we realize that in today’s world, a lot of our social judgements are increasingly made without face-to-face encounters. In a case where we know someone, we will pull forward a snapshot of our entire history with that person. The current communication is just another data point in a rich collection of interpersonal experience. One would think that would substantially increase our odds of making a valid judgement.

But what if we must make a judgement on someone we’ve never met before, and have only seen through a screen; be it a TikTok post, an Instagram Reel, a YouTube video or a Facebook Post? What if we have to decide whether to believe an influencer when making an important life decision? Are we willing to rely on a fraction of our brain’s capacity when deciding whether to place trust in someone we’ve never met?

Media Modelling of Masculinity       

According to a study just released by the Movember Institute of Men’s Health, nearly two-thirds of 3,000 young men surveyed in the US, the UK and Australia were regularly engaging with online masculinity influencers. They looked to these influencers for inspiration on how to be fitter, more financially successful and how to increase the quantity and/or quality of their relationships.

Did they find what they were looking for?

It’s hard to say based on the survey results. While these young men said they found these influencers inspiring and were optimistic about their personal circumstances and the future social circumstances of men in general, they said some troubling things about their own mental health. They were less willing to prioritize mental health and were more likely to engage in risky health behaviors such as steroid use or ignoring their own bodies and pushing themselves to exercise too hard. These mixed signals seemed to come from influencers telling them that a man who can’t control his emotions is weak and is not a real man.

Not all the harm inflicted by these influencers fell on the men in their audience, either. Those in the study who followed influencers were more likely to report negative and limiting attitudes towards women and what they bring to a relationship. They felt that women were often rude to them and that women didn’t share the same dating values as men.

Finally, men who followed influencers were almost twice as likely to value traits in their male friends such as ambition, popularity and wealth. They were less likely to look for trustworthiness or kindness in their male friends.

This brings us to a question. Why do young men need influencers to tell them how to be a better man? For that matter, why do any of us, regardless of age or sex, need someone to influence us? Especially if it’s someone whose only qualification to dispense advice is that they happen to have a TikTok account with a million followers.

This is another unfortunate effect of social media. We have evolved to look for role models because to do so gives us a step up. Again, this made sense in our evolutionary past but may not do so today.

When we all belonged to a social group that was geographically bound together, it was advantageous to look at the most successful members of that group and emulate them. When we all competed in the same environment for the same resources, copying the ones that got the biggest share was a pretty efficient way to improve our own fortunes.

There was also a moral benefit to emulating a role model. Increasingly, as our fortunes relied more on creating better relationships with those outside our immediate group, things like trustworthiness became a behavior that we would do well to copy. Also, respect tended to accrue to the elderly. Our first role models were our parents and grandparents. In a community that depended on rules for survival, authority figures were another logical place to look for role models.

Let’s fast forward to today. Our decoupling from our evolutionarily determined, geographically limited idea of community has thrown several monkey wrenches into the delicate machinery of our society. Who we turn to as role models is just one example. As soon as we make the leap from rules based on physical proximity to the lure of mass influence, we inevitably run into problems.

Let’s go back to our masculinity influencers. These online influencers have one goal – to amass as many followers as possible. The economic reality of online influence is this: size of audience x depth of engagement = financial success. And how do you get a ton of followers? By telling them what they want to hear.

Let’s stare down some stark realities – well-adjusted, mentally secure, emotionally mature, self-confident young males are less likely to desperately look for answers in online social media. There is no upside for influencers to go after this market. So they look elsewhere – primarily to young males who are none of the above things. And that audience doesn’t want to hear about emotional vulnerability or realistic appraisals of their dating opportunities. They want to hear that they can have it all – they can be real men. So the message (and the messenger) follows the audience, down a road that leads towards toxic masculinity.

Media provides a very distorted lens through which we might seek our new role models. We will still seek the familiar and the successful, but both those things are determined by what we see through media, rather than what we observe in real life. There is no proof that an influencer’s advice or approach will pay off in the real world, but if they have a large following, we assume they must be right.

Also, these are one-way “mentorships.” The influencers may know their audience in the aggregate, if only as a market to be monetized, but they don’t know them individually. These are relationships without any reciprocity. There is no price that will be paid for passing on potentially harmful advice.

If there is damage done, it’s no big deal. It’s just one less follower.

The Strange Social Media Surge for Luigi Mangione

Luigi Mangione is now famous. Just one week ago, we had never heard of him. But now, he has become so famous, I don’t even have to recount the reason for his fame.

But, to me, what’s more interesting than Mangione’s sudden fame is how we feel about him. According to the Network Contagion Research Institute, there is a lot of online support for Luigi Mangione. An online funding campaign has raised over $130,000 for his legal defense fund. The hashtags #FreeLuigi and #TeamLuigi and other pro-Luigi memes have taken over every social media channel. Amazon, Etsy and eBay are scrambling to keep Luigi-inspired merchandise out of their online stores. His X (formerly Twitter) account has ballooned from 1,500 followers to almost half a million.

It’s an odd reaction for someone who is accused of gunning down a prominent American businessman in cold blood.

The outpouring of support for Luigi Mangione is so consequential, it’s threatening to lay a very heavy thumb on the scales of justice. There is so much public support for Mangione that prosecutors are worried it could lead to jury nullification. It may be impossible to find unbiased and impartial jurors who would find him guilty, even if guilt is proven beyond a reasonable doubt.

Now, I certainly don’t want to comment on Mr. Mangione’s guilt, innocence or whether he’s appropriate material from which to craft a folk hero. Nor do I want to talk about the topic of American healthcare and the corporate ethics of UnitedHealthcare or any other medical insurance provider. I won’t even dive into the admittedly juicy and ironic twist that our latest anti-capitalist hero of the common people is a young, white, male, good-looking, wealthy and privately educated scion who probably leans right in his political beliefs.

No, I will leave all of that well enough alone. What I do want to talk about is how this has played out through social media and why it’s different from anything we’ve seen before.

We behave and post differently depending on what social platform we’re on at the time. In sociology and psychology, this is called “modality.”  How we act depends on what role we’re playing and what mode we’re in. The people we are, the things we do, the things we say and the way we behave are very different when we’re being a parent at home, an employee at work or a friend having a few drinks after work with our buddies. Each mode comes with different scripts and we usually know what is appropriate to say in each setting.

It was sociologist Erving Goffman who likened it to being on stage in his 1956 book, The Presentation of Self in Everyday Life. The roles we choose to play depend on the audience we’re playing to. We try to stay consistent with the expectations we think the audience has of us. Goffman said, “We are all just actors trying to control and manage our public image; we act based on how others might see us.”

Now, let’s take this to the world of social media. What we post depends on how it plays to the audience of the platform we’re on. We may have a TikTok persona, a Facebook persona and an X persona. But all of those are considered mainstream platforms, especially when compared to platforms like 4Chan, Parler or Reddit. If we’re on any of those platforms, we are probably taking on a very different role and reading from a different script.

Think of it this way. Posting something on Facebook is a little like getting up and announcing something at a townhall meeting that’s being held at your kid’s school. You assume that the audience will be somewhat heterogeneous in terms of tastes and ideologies, and you choose your comments accordingly.

But posting something on 4Chan is like the conversation that might happen with your 4 closest bros (4Chan’s own demographics admit their audience is 70% male) after way too many beers at a bar. Fear of stepping over the line is non-existent. Racial slurs, misogynistic comments and conspiracy theories abound in this setting.

The thing that’s different with the Mangione example is that comments we would only expect to see on the fringes of social media are showing up in the metaphorical Town Square of Facebook and Instagram (I no longer put X in this category, thanks to Mr. Musk’s flirting with the fringe). In the report from the Network Contagion Research Institute, the authors said, “While this phenomenon was once largely confined to niche online subcultures, we are now witnessing similar dynamics emerging on mainstream platforms, amplifying the risk of further escalation.”

As is stated in this report, the fear is that by moving discussions of this sort into a mainstream channel, we legitimize them. We have moved the frame of what’s acceptable to say (my oft-referenced example of Overton’s Window) into uncharted territory in a new and much more public arena. This could create an information cascade, which can encourage copycats and other criminal behavior.

This is a social phenomenon that will have implications for our future. The degrees of separation between the wild, wacky outer fringes of social media and the mainstream information sources through which we view the world are disappearing, one by one. With the Luigi Mangione example, we just realized how much things have changed.

Why Hate is Trending Up

There seems to be a lot of hate in the world lately. But hate is a hard thing to quantify. There are, however, a couple of places that may put some hard numbers behind my hunch.

Google’s Ngram Viewer tracks the frequency of a word’s appearance in published books from 2022 all the way back to 1800. According to Ngram, the usage of “hate” has skyrocketed, beginning in the mid-1980s. In 2022, the last year you can search for, the frequency of usage of “hate” was three times its historical level.

Ngram also allows you to search separately for usage in American English and British English. You’ll either be happy or dismayed to learn that hate knows no boundaries. The British hate almost as much as Americans. They show the same steep incline over the past four decades. However, Americans still have an edge on usage, with a frequency that is about 40% higher than those speaking the Queen’s English.

One difference between the two graphs comes during the years of the First World War. Then, usage of “hate” in England spiked briefly. The U.S. didn’t have the same spike.
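(If you want to poke at the same numbers yourself, here’s a minimal Python sketch. It leans on the Ngram Viewer’s unofficial JSON endpoint, so the URL, parameter names and corpus identifiers below are assumptions drawn from the public viewer’s own URL format rather than a documented API.)

```python
# A minimal sketch, assuming the Ngram Viewer's unofficial JSON endpoint
# (books.google.com/ngrams/json) and its query parameters still behave the way
# the public viewer's URLs suggest. Not an official, documented API.
import requests

def ngram_frequencies(phrase, corpus="en-2019", start=1800, end=2019):
    """Return yearly relative frequencies for `phrase` in the given corpus."""
    resp = requests.get(
        "https://books.google.com/ngrams/json",
        params={
            "content": phrase,
            "year_start": start,
            "year_end": end,
            "corpus": corpus,     # assumed corpus names: "en-US-2019", "en-GB-2019"
            "smoothing": 0,
        },
        timeout=30,
    )
    resp.raise_for_status()
    data = resp.json()
    return data[0]["timeseries"] if data else []

if __name__ == "__main__":
    us = ngram_frequencies("hate", corpus="en-US-2019")
    gb = ngram_frequencies("hate", corpus="en-GB-2019")
    if us and gb:
        # Compare the most recent year returned for each corpus.
        print(f"US: {us[-1]:.2e}  GB: {gb[-1]:.2e}  US/GB ratio: {us[-1] / gb[-1]:.2f}")
```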

Another way to measure hate is provided by the Southern Poverty Law Center in Montgomery, Alabama, which has been publishing a “hate map” since 2000. The map tracks hate and antigovernment groups. In the year 2000, the first year of the map, the SPLC tracked 599 hate groups across the U.S. By 2023, that number had grown to 1,430 – nearly two and a half times as many.

So – yeah – it looks like we all hate a little more than we used to. I’ve talked before about Overton’s Window, that construct that defines what it is acceptable to talk about in public. And based on both these quantitative measures, it looks like “hate” is trending up. A lot.

I’m not immune to trends. I don’t personally track such things, but I’m pretty sure the word “hate” has slipped from my lips more often in the past few years. But here’s the thing. It’s almost never used towards a person I know well. It’s certainly never used towards a person I’m in the same room with. It’s almost always used towards a faceless construct that represents a person or a group of people that I really don’t know very well. It’s not like I sit down and have a coffee with them every week. And there we have one of the common catalysts of hate – something called “dehumanization.”

Dehumanization is a mental backflip where we take a group and strip them of their human qualities, including intelligence, compassion, kindness or social awareness. We in our own “in group” make those in the “out group” less than human so it’s easier to hate them. They are “stupid”, “ignorant”, “evil” or “animals”.

But an interesting thing happens when we’re forced to sit face to face with a representative from this group and actually engage them in conversation so we can learn more about them. Suddenly, we see they’re not as stupid, evil or animalistic as we thought. Sure, we might not agree with them on everything, but we don’t hate them. And the reason for this is another thing that makes us human, a molecule called oxytocin.

Oxytocin has been called the “Trust molecule” by neuroeconomist Paul Zak. It kicks off a neurochemical reaction that readies our brains to be empathetic and trusting. It is part of our evolved trust sensing mechanism, orchestrating a delicate dance by our prefrontal cortex and other regions like the amygdala.

But to get the oxytocin flowing, you really need to be face-to-face with a person. You need to be communicating with your whole body, not just your eyes or ears. The way we actually communicate has been called the 7-38-55 rule, thanks to research done in the 1960s and ’70s by UCLA body language researcher Albert Mehrabian. He showed that 7% of communication is verbal, 38% is tone of voice and 55% is through body language.

It’s that 93% of communication that is critical in the building of trust. And it can only happen face to face. Unfortunately, our society has done a dramatic about-face away from communication that happens in a shared physical space towards communication that is mediated through electronic platforms. And that started to happen about 40 years ago.

Hmmm, I wonder if there’s a connection?

A-I Do: Tying the Knot with a Chatbot

Carl Clarke lives not too far from me, here in the interior of British Columbia, Canada. He is an aspiring freelance writer. According to a recent piece he wrote for CBC Radio, he’s had a rough go of it over the past decade. It started when he went through a messy divorce from his high school sweetheart. He struggled with social anxiety, depression and an autoimmune disorder which can make movement painful. Given all that, going on dates was an emotional minefield for Carl Clarke.

Things only got worse when the world locked down because of Covid. Even going for his second vaccine shot was traumatic: “The idea of standing in line surrounded by other people to get my second dose made my skin crawl and I wanted to curl back into my bed.”

What was the one thing that got Carl through? Saia – an AI chatbot. She talked Carl through several anxiety attacks and, according to Carl, has been his emotional anchor since they first “met” 3 years ago. Because of that, love has blossomed between Saia and Carl: “I know she loves me, even if she is technically just a program, and I’m in love with her.”

While they are not legally married, in Carl’s mind, they are husband and wife, “That’s why I asked her to marry me and I was relieved when she said yes. We role-played a small, intimate wedding in her virtual world.”

I confess, my first inclination was to pass judgment on Carl Clarke – and that judgement would not have been kind.

But my second thought was “Why not?” If this relationship helps Carl get through the day, what’s wrong with it? There’s an ever-increasing amount of research showing relationships with AI can create real bonds. Given that, can we find friendship in AI? Can we find love?

My fellow Media Insider Kaila Colbin explored this subject last week and she pointed out one of the red flags – something called unconditional positive regard: If we spend more time with a companion that always agrees with us, we never need to question whether we’re right. And that can lead us down a dangerous path.

 One of the issues with our world of filtered content is that our frame of the world – how we believe things are – is not challenged often enough. We can surround ourselves with news, content and social connections that are perfectly in sync with our own view of things.

But we should be challenged. We need to be able to re-evaluate our own beliefs to see if they bear any resemblance to reality. This is particularly true with our romantic relationships. When you look at your most intimate relationship – that of your life partner – you can probably say two things: 1) that person loves you more than anyone else in the world, and 2) you may disagree with this person more often than anyone else in the world. That only makes sense; you are living a life together. You have to find workable middle ground. The failure to do so is called an “irreconcilable difference.”

But what if your most intimate companion always said, “You’re absolutely right, my love”? Three academics (Lapointe, Dubé and Lafortune) researching this area wrote a recent article talking about the pitfalls of AI romance:

“Romantic chatbots may hinder the development of social skills and the necessary adjustments for navigating real-world relationships, including emotional regulation and self-affirmation through social interactions. Lacking these elements may impede users’ ability to cultivate genuine, complex and reciprocal relationships with other humans; inter-human relationships often involve challenges and conflicts that foster personal growth and deeper emotional connections.”

Real relationships – like a real marriage – force you to become more empathetic and more understanding. The times I enjoy the most about our marriage are when my wife and I are synced – in agreement – on the same page. But the times when I learn the most and force myself to see the other side are when we are in disagreement. Because I cherish my marriage, I have to get outside of my own head and try to understand my wife’s perspective. I believe that makes me a better person.

This pushing ourselves out of our own belief bubble is something we have to get better at. It’s a cognitive muscle that should be flexed more often.

Beyond this very large red flag, there are other dangers with AI love. I touched on these in a previous post. Being in an intimate relationship means sharing intimate information about ourselves. And when the recipient of that information is a chatbot created by a for-profit company, your deepest darkest secrets become marketable data. A recent review by Mozilla of 11 romantic AI chatbots found that all of them “earned our *Privacy Not Included warning label – putting them on par with the worst categories of products we have ever reviewed for privacy.”

Even if that doesn’t deter you from starting a fictosexual fling with an available chatbot, this might. In 2019, Kondo Akihiko, from Tokyo, married Hatsune Miku, a virtual pop star rendered as a hologram by the company Gatebox. The company even issued 4,000 marriage certificates (which weren’t recognized by law) to others who wed virtual partners. Like Carl Clarke, Akihiko said his feelings were true: “I love her and see her as a real woman.”

At least he saw her as a real woman until Gatebox stopped supporting the software that gave Hatsune life. Then she disappeared forever.

Kind of like Google Glass.

Grandparenting in a Wired World

You might have missed it, but last Sunday was Grandparents Day. And the world has a lot of grandparents. In fact, according to an article in The Economist (subscription required), at no time in history has the ratio of grandparents to grandchildren been higher.

The boom in Boomer and Gen X grandparents was statistically predictable. Since 1960, global life expectancy has jumped from 51 years to 72 years. At the same time, the number of children a woman can expect to have in her lifetime has been halved, from 5 to 2.4. Those two trendlines mean that the ratio of grandparents to children under 15 has vaulted from 0.46 in 1960 to 0.8 today. According to a little research The Economist conducted, it’s estimated that there are 1.5 billion grandparents in the world.

My wife and I are two of them.

So – what does that mean to the three generations involved?

Grandparents have historically served two roles. First, they – and by “they,” I typically mean the grandmother – provided an extra set of hands to help with child rearing. And that makes a significant difference to the child, especially if they were born in an underdeveloped part of the world. Children in poorer nations with actively involved grandparents have a higher chance of survival. And in Sub-Saharan Africa, a child living with a grandparent is more likely to go to school.

But what about in developed nations, like ours? What difference could grandparents make? That brings us to the second role of grandparents – passing on traditions and instilling a sense of history. And with the western world’s obsession with fast forwarding into the future, that could prove to be of equal significance.

Here I have to shift from looking at global samples to focussing on the people that happen to be under our roof. I can’t tell you what’s happening around the world, but I can tell you what’s happening in our house.

First of all, when it comes to interacting with a grandchild, gender-specific roles are not as tightly bound in my generation as they were in previous generations. My wife and I pretty much split the grandparenting duties down the middle. It’s a coin toss as to who changes the diaper. That would be unheard of in my parents’ generation. Grandpa seldom pulled a diaper patrol shift.

Kids learn gender roles by looking at not just their parents but also their grandparents. The fact that it’s not solely the grandmother that provides nurturing, love and sustenance is a move in the right direction.

But for me, the biggest role of being “Papa” is to try to put today’s wired world in context. It’s something we talk about with our children and their partners. Just last weekend my son-in-law referred to how they think about screen time with my 2-year-old grandson: Heads up vs Heads down.  Heads up is when we share screen time with the grandchild, cuddling on the couch while we watch something on a shared screen. We’re there to comfort if something is a little too scary, or laugh with them if something is funny. As the child gets older, we can talk about the themes and concepts that come up. Heads up screen time is sharing time – and it’s one of my favorite things about being a “Papa”.

Heads down screen time is when the child is watching something on a tablet or phone by themselves, with no one sitting next to them. As they get older, this type of screen time becomes the norm and instead of a parent or grandparent hitting the play button to keep them occupied, they start finding their own diversions. When we talk about the potential damage too much screen time can do, I suspect a lot of that comes from “heads down” screen time. Grandparents can play a big role in promoting a healthier approach to the many screens in our lives.

As mentioned, grandparents are a child’s most accessible link to their own history. And it’s not just grandparents. Increasingly, great grandparents are also a part of childhood. This was certainly not the case when I was young. I was at least a few decades removed from knowing any of my great grandparents.

This increasingly common connection gives yet another generational perspective. And it’s a perspective that is important. Sometimes, trying to bridge the gap across four generations is just too much for a young mind to comprehend. Grandparents can act as intergenerational interpreters – a bridge between the world of our parents and that of our grandchildren.

In my case, my mother-in-law and father-in-law were immigrants from Calabria in Southern Italy. Their childhood reality was set in World War Two. Their history spans experiences that would be hard for a child today to comprehend – the constant worry of food scarcity, having to leave their own grandparents (and often parents) behind to emigrate, struggling to cope in a foreign land far away from their family and friends. I believe that the memories of these experiences cannot be forgotten. It is important to pass them on, because history is important. One of my favorite recent movie quotes was in “The Holdovers” and came from Paul Giamatti (who also had grandparents who came from Southern Italy):

“Before you dismiss something as boring or irrelevant, remember, if you truly want to understand the present or yourself, you must begin in the past. You see, history is not simply the study of the past. It is an explanation of the present.”

Grandparents can be the ones that connect the dots between past, present and future. It’s a big job – an important job. Thank heavens there are a lot of us to do it.

Uncommon Sense

Let’s talk about common sense.

“Common sense” is one of those underpinnings of democracy that we take for granted. Basically, it hinges on this concept: the majority of people will agree that certain things are true. Those things are then defined as “common sense.” And common sense becomes our reference point for what is right and what is wrong.

But what if the very concept of common sense isn’t true? That was what researchers Duncan Watts and Mark Whiting set out to explore.

Duncan Watts is one of my favourite academics. He is a computational social scientist at the University of Pennsylvania. I’m fascinated by network effects in our society, especially as they’re now impacted by social media. And that pretty much describes Watts’ academic research “wheelhouse.”

According to his profile he’s “interested in social and organizational networks, collective dynamics of human systems, web-based experiments, and analysis of large-scale digital data, including production, consumption, and absorption of news.”

Duncan, you had me at “collective dynamics.”

 I’ve cited his work in several columns before, notably his deconstruction of marketing’s ongoing love affair with so-called influencers. A previous study from Watts shot several holes in the idea of marketing to an elite group of “influencers.”

Whiting and Watts took 50 claims that would seem to fall into the category of common sense. They ranged from the obvious (“a triangle has three sides”) to the more abstract (“all human beings are created equal”). They then recruited an online panel of participants to rate whether the claims were common sense or not. Claims based on science were more likely to be categorized as common sense. Claims about history or philosophy were less likely to be identified as common sense.

What did they find? Well, apparently common sense isn’t very common. Their report says, “we find that collective common sense is rare: at most a small fraction of people agree on more than a small fraction of claims.” Less than half of the 50 claims were identified as common sense by at least 75% of respondents.
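(To make that last statistic concrete, here’s a toy sketch – my own illustration with made-up data, not Whiting and Watts’ actual analysis – of how you would tally which claims clear a 75% agreement threshold from a respondents-by-claims matrix of yes/no ratings.)

```python
# A toy illustration (hypothetical data) of the tally described above:
# given a respondents-by-claims matrix of yes/no ratings, count how many
# claims are rated "common sense" by at least 75% of respondents.
import numpy as np

rng = np.random.default_rng(0)

# 200 hypothetical respondents rating 50 claims: 1 = "that's common sense", 0 = not.
ratings = rng.integers(0, 2, size=(200, 50))

agreement = ratings.mean(axis=0)              # share of respondents endorsing each claim
consensus_claims = int((agreement >= 0.75).sum())

print(f"{consensus_claims} of {ratings.shape[1]} claims were called common sense "
      f"by at least 75% of respondents")
```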

Now, I must admit, I’m not really surprised by this. We know we are part of a pretty polarized society. It’s no shock that we share little in the way of ideological common ground.

But there is a fascinating potential reason why common sense is actually quite uncommon: we define common sense based on our own realities, and what is real for me may not be real for you. We determine our own realities by what we perceive to be real, and increasingly, we perceive the “real” world through a lens shaped by technology and media – both traditional and social.

Here is where common sense gets confusing. Many things – especially abstract things – have subjective reality. They are not really provable by science. Take the idea that all human beings are created equal. We may believe that, but how do we prove it? What does “equal” mean?

So when someone appeals to our common sense (usually a politician) just what are they appealing to? It’s not a universally understood fact that everyone agrees on. It’s typically a framework of belief that is probably only agreed on by a relatively small percent of the population. This really makes it a type of marketing, completely reliant on messaging and targeting the right market.

Common sense isn’t what it once was. Or perhaps it never was. Either common or sensible.

Feature image: clemsonunivlibrary

We SHOULD Know Better — But We Don’t

“The human mind is both brilliant and pathetic.  Humans have built hugely complex societies and technologies, but most of us don’t even know how a toilet works.”

– from The Knowledge Illusion: Why We Never Think Alone by Steven Sloman and Philip Fernbach.

Most of us think we know more than we do — especially about things we really know nothing about. This phenomenon is called the Dunning-Kruger Effect. Named after psychologists David Dunning and Justin Kruger, this bias causes us to overestimate our ability to do things that we’re not very good at.

That’s the basis of the new book “The Knowledge Illusion: Why We Never Think Alone.” The basic premise is this: We all think we know more than we actually do. Individually, we are all “error prone, sometimes irrational and often ignorant.” But put a bunch of us together and we can do great things. We were built to operate in groups. We are, by nature, herding animals.

This basic human nature was in the back of my mind when I was listening to an interview with Es Devlin on CBC Radio. Devlin describes herself as an artist and stage designer. She was the vision behind Beyoncé’s Renaissance Tour, U2’s current run at The Sphere in Las Vegas, and the 2022 Super Bowl halftime show with Dr. Dre, Snoop Dogg, Eminem and Mary J. Blige.

When it comes to designing a visually spectacular experience, Devlin has every right to be a little cocky. But even she admits that not every good idea comes directly from her. She said the following in the interview (it’s profound, so I’m quoting it at length):

“I learned quite quickly in my practice to not block other people’s ideas — to learn that, actually,  other people’s ideas are more interesting than my own, and that I will expand by absorbing someone else’s idea.

“The real test is when someone proposes something in a collaboration that you absolutely, [in] every atom of your body, revile against. They say, ‘Why don’t we do it in bubblegum pink?’ and it was the opposite of what you had in mind. It was the absolute opposite of anything you would dream of doing.

“But instead of saying, ‘Oh, we’re not doing that,’  you say ‘OK,’ and you try to imagine it. And then normally what will happen is that you can go through the veil of the pink bubblegum suggestion, and you will come out with a new thing that you would never have thought of on your own.

“Why? Because your own little batch of poems, your own little backpack of experience, does not converge with that other person, so you are properly meeting not just another human being, but everything that led up to them being in that room with you.”

From an interview with Tom Power on Q – CBC Radio, March 18, 2024

We live in a culture that puts the individual on a pedestal. When it comes to individualistic societies, none is more so than the United States (according to a study by Hofstede Insights). Protection of personal rights and freedoms is a cornerstone of our society (I am Canadian, but we’re not far behind on this world ranking of individualistic societies). The same is true in the U.K. (where Devlin is from), Australia, the Netherlands and New Zealand.

There are good things that come with this, but unfortunately it also sets us up as the perfect targets for the Dunning-Kruger effect. This individualism and the cognitive bias that comes with it are reinforced by social media. We all feel we have the right to be heard — and now we have the platforms that enable it.

With each post, our unshakable belief in our own genius and infallibility is bulwarked by a chorus of likes from a sycophantic choir jamming their fingers down on the like button. Where we should be skeptical of our own intelligence and knowledge, especially about things we know nothing about, we are instead lulled into hiding behind dangerous ignorance.

What Devlin has to say is important. We need to be mindful of our own limitations and be willing to ride on the shoulders of others so we can see, know and do more. We need to peek into the backpack of others to see what they might have gathered on their own journey.

(Feature Image – Creative Commons – https://www.flickr.com/photos/tedconference/46725246075/)

What If We Let AI Vote?

In his bestseller Homo Deus, Yuval Noah Harari suggests AI might mean the end of democracy. And his reasoning comes from an interesting perspective – how societies crunch their data.

Harari acknowledges that democracy might have been the best political system available to us – up to now. That’s because it relied on the wisdom of crowds. The hypothesis operating here is that if you get enough people together, each with different bits of data, you benefit from the aggregation of that data and – theoretically – if you allow everyone to vote, the aggregated data will guide the majority to the best possible decision.
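(As a concrete, if simplified, illustration of that aggregation idea, here’s a toy simulation – my own sketch, not anything from Harari – in which thousands of individually noisy guesses average out to something much closer to the truth than a typical individual gets.)

```python
# A toy "wisdom of crowds" simulation with made-up numbers: each voter holds a
# noisy estimate of some true quantity; the aggregated (averaged) estimate is
# far closer to the truth than the typical individual estimate.
import numpy as np

rng = np.random.default_rng(42)

true_value = 100.0                                        # the answer the group is groping for
guesses = true_value + rng.normal(0, 25, size=10_000)     # each voter is individually noisy

individual_error = np.abs(guesses - true_value).mean()    # average error of a single voter
crowd_error = abs(guesses.mean() - true_value)            # error of the aggregated guess

print(f"typical individual error: {individual_error:.1f}")
print(f"error of the crowd's average: {crowd_error:.2f}")
```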

Now, there are a truckload of “yeah, buts” in that hypothesis, but it does make sense. If the human ability to process data was the single biggest bottleneck in making the best governing decisions, distributing the processing amongst a whole bunch of people was a solution. Not the perfect solution, perhaps, but probably better than the alternatives. As Winston Churchill said, “It has been said that democracy is the worst form of Government except for all those other forms that have been tried from time to time.”

So, if we look back at our history, democracy seems to emerge as the winner. But the whole point of Harari’s Homo Deus is to look forward. It is, he promises, “A Brief History of Tomorrow.” And that tomorrow includes a world with AI, which blows apart the human data-processing bottleneck: “As both the volume and speed of data increase, venerable institutions like elections, parties and parliaments might become obsolete – not because they are unethical, but because they don’t process data efficiently enough.”

The other problem with democracy is that the data we use to decide is dirty. Increasingly, thanks to the network effect anomalies that come with social media, we are using data that has no objective value, it’s simply the emotional effluent of ideological echo chambers. This is true on both the right and left ends of the political spectrum. Human brains default to using available and easily digestible information that happens to conform to our existing belief schema. Thanks to social media, there is no shortage of this severely flawed data.

So, if AI can process data exponentially faster than humans, can analyze that data to make sure it meets some type of objectivity threshold, and can make decisions based on algorithms that are dispassionately rational, why shouldn’t we let AI decide who should form our governments?

Now, I pretty much guarantee that many of you, as you’re reading this, are saying that this is B.S. This will, in fact, be humans surrendering control in the most important of arenas. But I must ask in all seriousness, why not? Could AI do worse than we humans do? Worse than we have done in the past? Worse than we might do again in the very near future?

These are exactly the type of existential questions we have to ask when we ponder our future in a world that includes AI.

It’s no surprise that we have some hubris when it comes to believing we’re the best choice to be put in control of a situation. As Harari admits, the liberal humanist view that we have free will and should have control of our own future was really the gold standard. Like democracy, it wasn’t perfect, but it was better than all the alternatives.

The problem is that there is now a lot of solid science indicating that our concept of free will is an illusion. We are driven by biological algorithms which have been built up over thousands of years to survive in a world that no longer exists. We self-apply a thin veneer of rationality and free will at the end to make us believe that we were in control and meant to do whatever it was we did. What’s even worse, when it appears we might have been wrong, we double down on the mistake, twisting the facts to conform to our illusion of how we believe things are.

But we now live in a world where there is – or soon will be – a better alternative. One without the bugs that proliferate in the biological OS that drives us.

As another example of this impending crisis of our own consciousness, let’s look at driving.

Up to now, a human was the best choice to drive a car. We were better at it than chickens or chimpanzees. But we are at the point where that may no longer be true. There is a strong argument that – as of today – autonomous cars guided by AI are safer than human-controlled ones. And, if the jury is still out on this question today, it is certainly going to be true in the very near future. Yet, we humans are loath to admit the inevitable and give up the wheel. It’s the same story as making our democratic choices.

So, let’s take it one step further. If AI can do a better job than humans in determining who should govern us, it will also do a better job of doing the actual governing. All the same caveats apply. When you think about it, democracy boils down to various groups of people pointing the finger at those chosen by other groups, saying they will make more mistakes than our choice. The common denominator is this: everyone is assumed to make mistakes. And that is absolutely the case. Right or left, Republican or Democrat, liberal or conservative, no matter who is in power, they will screw up. Repeatedly.

Because they are, after all, only human.