Grandparenting in a Wired World

You might have missed it, but last Sunday was Grandparents Day. And the world has a lot of grandparents. In fact, according to an article in The Economist (subscription required), at no time in history has the ratio of grandparents to grandchildren been higher.

The boom in Boomer and Gen X grandparents was statistically predictable. Since 1960, global life expectancy has jumped from 51 years to 72 years. At the same time, the number of children a woman can expect to have in her lifetime has been halved, from 5 to 2.4. Those two trendlines mean that the ratio of grandparents to children under 15 has vaulted from 0.46 in 1960 to 0.8 today. According to a little research The Economist conducted, there are an estimated 1.5 billion grandparents in the world.

My wife and I are two of them.

So – what does that mean to the three generations involved?

Grandparents have historically served two roles. First, they – and by “they,” I mean typically the grandmother – provided an extra set of hands to help with child rearing. And that makes a significant difference to a child, especially one born in an underdeveloped part of the world. Children in poorer nations with actively involved grandparents have a higher chance of survival. And in Sub-Saharan Africa, a child living with a grandparent is more likely to go to school.

But what about in developed nations, like ours? What difference could grandparents make? That brings us to the second role of grandparents – passing on traditions and instilling a sense of history. And with the western world’s obsession with fast-forwarding into the future, that could prove to be of equal significance.

Here I have to shift from looking at global samples to focussing on the people that happen to be under our roof. I can’t tell you what’s happening around the world, but I can tell you what’s happening in our house.

First of all, when it comes to interacting with a grandchild, gender-specific roles are not as tightly bound in my generation as they were in previous generations. My wife and I pretty much split the grandparenting duties down the middle. It’s a coin toss as to who changes the diaper. That would be unheard of in my parents’ generation. Grandpa seldom pulled a diaper patrol shift.

Kids learn gender roles by looking at not just their parents but also their grandparents. The fact that it’s not solely the grandmother that provides nurturing, love and sustenance is a move in the right direction.

But for me, the biggest role of being “Papa” is to try to put today’s wired world in context. It’s something we talk about with our children and their partners. Just last weekend my son-in-law referred to how they think about screen time with my 2-year-old grandson: Heads up vs Heads down.  Heads up is when we share screen time with the grandchild, cuddling on the couch while we watch something on a shared screen. We’re there to comfort if something is a little too scary, or laugh with them if something is funny. As the child gets older, we can talk about the themes and concepts that come up. Heads up screen time is sharing time – and it’s one of my favorite things about being a “Papa”.

Heads down screen time is when the child is watching something on a tablet or phone by themselves, with no one sitting next to them. As they get older, this type of screen time becomes the norm and, instead of a parent or grandparent hitting the play button to keep them occupied, they start finding their own diversions. When we talk about the potential damage too much screen time can do, I suspect a lot of that comes from “heads down” screen time. Grandparents can play a big role in promoting a healthier approach to the many screens in our lives.

As mentioned, grandparents are a child’s most accessible link to their own history. And it’s not just grandparents. Increasingly, great-grandparents are also a part of childhood. This was certainly not the case when I was young. I was at least a few decades removed from knowing any of my great-grandparents.

This increasingly common connection gives yet another generational perspective. And it’s a perspective that is important. Sometimes, trying to bridge the gap across four generations is just too much for a young mind to comprehend. Grandparents can act as intergenerational interpreters – a bridge between the world of our parents and that of our grandchildren.

In my case, my mother- and father-in-law were immigrants from Calabria in Southern Italy. Their childhood reality was set in World War Two. Their history spans experiences that would be hard for a child today to comprehend – the constant worry of food scarcity, having to leave their own grandparents (and often parents) behind to emigrate, struggling to cope in a foreign land far away from their family and friends. I believe that the memories of these experiences cannot be forgotten. It is important to pass them on, because history is important. One of my favorite recent movie quotes is from “The Holdovers,” delivered by Paul Giamatti (who also had grandparents who came from Southern Italy):

“Before you dismiss something as boring or irrelevant, remember, if you truly want to understand the present or yourself, you must begin in the past. You see, history is not simply the study of the past. It is an explanation of the present.”

Grandparents can be the ones that connect the dots between past, present and future. It’s a big job – an important job. Thank heavens there are a lot of us to do it.

Uncommon Sense

Let’s talk about common sense.

“Common sense” is one of those underpinnings of democracy that we take for granted. Basically, it hinges on this concept: the majority of people will agree that certain things are true. Those things are then defined as “common sense.” And common sense becomes our reference point for what is right and what is wrong.

But what if the very concept of common sense isn’t true? That was what researchers Duncan Watts and Mark Whiting set out to explore.

Duncan Watts is one of my favourite academics. He is a computational social scientist at the University of Pennsylvania. I’m fascinated by network effects in our society, especially as they’re now impacted by social media. And that pretty much describes Watts’ academic research “wheelhouse.”

According to his profile he’s “interested in social and organizational networks, collective dynamics of human systems, web-based experiments, and analysis of large-scale digital data, including production, consumption, and absorption of news.”

Duncan, you had me at “collective dynamics.”

I’ve cited his work in several columns before, notably a previous study that shot several holes in marketing’s ongoing love affair with so-called “influencers” and the idea of marketing to an elite group of them.

Whiting and Watts took 50 claims that would seem to fall into the category of common sense. They ranged from the obvious (“a triangle has three sides”) to the more abstract (“all human beings are created equal”). They then recruited an online panel of participants to rate whether the claims were common sense or not.

What did they find? Well, apparently common sense isn’t very common. Their report says, “we find that collective common sense is rare: at most a small fraction of people agree on more than a small fraction of claims.” Fewer than half of the 50 claims were identified as common sense by at least 75% of respondents. Claims based on science were more likely to be categorized as common sense; claims about history or philosophy were less likely to be.
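To make that measurement concrete, here’s a minimal sketch in Python – with invented ratings standing in for the study’s actual data – of the kind of tally the finding above describes: score each claim by the share of respondents who called it common sense, then count how many claims clear a 75% bar.

```python
import random

random.seed(1)
n_respondents, n_claims = 200, 50

# Invented for illustration: each claim gets its own underlying agreement rate.
claim_rates = [random.uniform(0.3, 0.95) for _ in range(n_claims)]

# ratings[r][c] is True if respondent r rated claim c as "common sense."
ratings = [[random.random() < claim_rates[c] for c in range(n_claims)]
           for _ in range(n_respondents)]

# Per-claim consensus: the share of respondents who agreed it was common sense.
consensus = [sum(row[c] for row in ratings) / n_respondents
             for c in range(n_claims)]

# How many of the 50 claims clear the 75% bar?
high_consensus = sum(rate >= 0.75 for rate in consensus)
print(f"{high_consensus} of {n_claims} claims reached 75% consensus")
```

The exact numbers here are made up; the point is that once you measure agreement claim by claim, “common sense” stops looking like a single shared body of knowledge.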

Now, I must admit, I’m not really surprised by this. We know we are part of a pretty polarized society. It’s no shock that we share little in the way of ideological common ground.

But there is a fascinating potential reason why common sense is actually quite uncommon: we define common sense based on our own realities, and what is real for me may not be real for you. We determine our own realities by what we perceive to be real, and increasingly, we perceive the “real” world through a lens shaped by technology and media – both traditional and social.

Here is where common sense gets confusing. Many things – especially abstract things – have subjective reality. They are not really provable by science. Take the idea that all human beings are created equal. We may believe that, but how do we prove it? What does “equal” mean?

So when someone (usually a politician) appeals to our common sense, just what are they appealing to? It’s not a universally understood fact that everyone agrees on. It’s typically a framework of belief that is probably shared by only a relatively small percentage of the population. This really makes it a type of marketing, completely reliant on messaging and targeting the right market.

Common sense isn’t what it once was. Or perhaps it never was. Either common or sensible.


We SHOULD Know Better — But We Don’t

“The human mind is both brilliant and pathetic.  Humans have built hugely complex societies and technologies, but most of us don’t even know how a toilet works.”

– from “The Knowledge Illusion: Why We Never Think Alone” by Steven Sloman and Philip Fernbach.

Most of us think we know more than we do — especially about things we really know nothing about. This phenomenon is called the Dunning-Kruger Effect. Named after psychologists Justin Kruger and David Dunning, this bias causes us to overestimate our ability to do things that we’re not very good at.

That’s the basis of the new book “The Knowledge Illusion: Why We Never Think Alone.” The basic premise is this: We all think we know more than we actually do. Individually, we are all “error prone, sometimes irrational and often ignorant.” But put a bunch of us together and we can do great things. We were built to operate in groups. We are, by nature, herding animals.

This basic human nature was in the back of my mind when I was listening to an interview with Es Devlin on CBC Radio. Devlin describes herself as an artist and stage designer. She was the vision behind Beyoncé’s Renaissance Tour, U2’s current run at the Sphere in Las Vegas, and the 2022 Super Bowl halftime show with Dr. Dre, Snoop Dogg, Eminem and Mary J. Blige.

When it comes to designing a visually spectacular experience, Devlin has every right to be a little cocky. But even she admits that not every good idea comes directly from her. She said the following in the interview (it’s profound, so I’m quoting it at length):

“I learned quite quickly in my practice to not block other people’s ideas — to learn that, actually,  other people’s ideas are more interesting than my own, and that I will expand by absorbing someone else’s idea.

“The real test is when someone proposes something in a collaboration that you absolutely, [in] every atom of your body, revile against. They say, ‘Why don’t we do it in bubblegum pink?’ and it was the opposite of what you had in mind. It was the absolute opposite of anything you would dream of doing.

“But instead of saying, ‘Oh, we’re not doing that,’  you say ‘OK,’ and you try to imagine it. And then normally what will happen is that you can go through the veil of the pink bubblegum suggestion, and you will come out with a new thing that you would never have thought of on your own.

“Why? Because your own little batch of poems, your own little backpack of experience, does not converge with that other person, so you are properly meeting not just another human being, but everything that led up to them being in that room with you.”

– From an interview with Tom Power on Q, CBC Radio, March 18, 2024

We live in a culture that puts the individual on a pedestal. When it comes to individualistic societies, none is more so than the United States (according to a study by Hofstede Insights). Protection of personal rights and freedoms is the cornerstone of our society (I am Canadian, but we’re not far behind on this world ranking of individualistic societies). The same is true in the U.K. (where Devlin is from), Australia, the Netherlands and New Zealand.

There are good things that come with this, but unfortunately it also sets us up as the perfect targets for the Dunning-Kruger effect. This individualism and the cognitive bias that comes with it are reinforced by social media. We all feel we have the right to be heard — and now we have the platforms that enable it.

With each post, our unshakable belief in our own genius and infallibility is bulwarked by a chorus of likes from a sycophantic choir jamming their fingers down on the like button. Where we should be skeptical of our own intelligence and knowledge, especially about things we know nothing about, we are instead lulled into hiding behind dangerous ignorance.

What Devlin has to say is important. We need to be mindful of our own limitations and be willing to ride on the shoulders of others so we can see, know and do more. We need to peek into the backpack of others to see what they might have gathered on their own journey.


What If We Let AI Vote?

In his bestseller “Homo Deus,” Yuval Noah Harari suggests AI might mean the end of democracy. And his reasoning comes from an interesting perspective – how societies crunch their data.

Harari acknowledges that democracy might have been the best political system available to us – up to now. That’s because it relied on the wisdom of crowds. The hypothesis at work here is that if you get enough people together, each with different bits of data, you benefit from the aggregation of that data – and, theoretically, if you allow everyone to vote, the aggregated data will guide the majority to the best possible decision.

Now, there are a truckload of “yeah, but”s in that hypothesis, but it does make sense. If the human ability to process data was the single biggest bottleneck in making the best governing decisions, distributing the processing amongst a whole bunch of people was a solution. Not the perfect solution, perhaps, but probably better than the alternatives. As Winston Churchill said, “It has been said that democracy is the worst form of Government except for all those other forms that have been tried from time to time.”

So, if we look back at our history, democracy seems to emerge as the winner. But the whole point of Harari’s Homo Deus is to look forward. It is, he promises, “A Brief History of Tomorrow.” And that tomorrow includes a world with AI, which blows apart the human data-processing bottleneck: “As both the volume and speed of data increase, venerable institutions like elections, parties and parliaments might become obsolete – not because they are unethical, but because they don’t process data efficiently enough.”

The other problem with democracy is that the data we use to decide is dirty. Increasingly, thanks to the network-effect anomalies that come with social media, we are using data that has no objective value; it’s simply the emotional effluent of ideological echo chambers. This is true on both the right and left ends of the political spectrum. Human brains default to using available and easily digestible information that happens to conform to our existing belief schema. Thanks to social media, there is no shortage of this severely flawed data.

So, if AI can process data exponentially faster than humans, can analyze that data to make sure it meets some type of objectivity threshold, and can make decisions based on algorithms that are dispassionately rational, why shouldn’t we let AI decide who should form our governments?

Now, I pretty much guarantee that many of you, as you’re reading this, are saying that this is B.S. This will, in fact, be humans surrendering control in the most important of arenas. But I must ask in all seriousness, why not? Could AI do worse than we humans do? Worse than we have done in the past? Worse than we might do again in the very near future?

These are exactly the types of existential questions we have to ask when we ponder our future in a world that includes AI.

It’s no coincidence that we have some hubris when it comes to believing that we’re the best choice to be put in control of a situation. As Harari admits, the liberal human view that we have free will and should have control of our own future was really the gold standard. Like democracy, it wasn’t perfect, but it was better than all the alternatives.

The problem is that there is now a lot of solid science indicating that our concept of free will is an illusion. We are driven by biological algorithms that have been built up over thousands of years to survive in a world that no longer exists. We self-apply a thin veneer of rationality and free will at the end to make us believe that we were in control and meant to do whatever it was we did. What’s even worse, when it appears we might have been wrong, we double down on the mistake, twisting the facts to conform to our illusion of how we believe things are.

But we now live in a world where there is – or soon will be – a better alternative. One without the bugs that proliferate in the biological OS that drives us.

As another example of this impending crisis of our own consciousness, let’s look at driving.

Up to now, a human was the best choice to drive a car. We were better at it than chickens or chimpanzees. But we are at the point where that may no longer be true. There is a strong argument that – as of today – autonomous cars guided by AI are safer than human-controlled ones. And, if the jury is still out on this question today, it is certainly going to be true in the very near future. Yet we humans are loath to admit the inevitable and give up the wheel. It’s the same story as making our democratic choices.

So, let’s take it one step further. If AI can do a better job than humans in determining who should govern us, it will also do a better job of doing the actual governing. All the same caveats apply. When you think about it, democracy boils down to various groups of people pointing the finger at those chosen by other groups, saying they will make more mistakes than our choice. The common denominator is this: everyone is assumed to make mistakes. And that is absolutely the case. Right or left, Republican or Democrat, liberal or conservative, no matter who is in power, they will screw up. Repeatedly.

Because they are, after all, only human.

Greetings from the Great White (Frozen) North

This post comes to you from Edmonton, Alberta, where the outside temperature right now is minus forty degrees Celsius. If you’re wondering what that is in Fahrenheit, the answer is, “It doesn’t matter.” Minus forty is where the two scales match up.
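(If you want to check the math: Fahrenheit is F = 9/5 × C + 32, so setting F = C gives C = 9/5 × C + 32, which solves to C = -40 – the single point where the two scales agree.)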

If you add a bit of a breeze to that, you get a windchill factor that makes it feel like minus fifty Celsius (-58° F). The weather lady on the morning news just informed me that at that temperature, exposed flesh freezes in two to five minutes. Yesterday, an emergency alert flashed on my phone warning us that Alberta’s power grid was overloaded and could collapse under the demand, causing rotating power outages.

I don’t know about you, but I don’t think anyone should live in a place where winter can kill you. Nothing works as it should when it gets this cold, humans included. And yet, Albertans are toughing it out. I noticed that when it gets this cold, the standard niceties that people say change. Instead of telling me to “have a nice day,” everyone has been encouraging me to “stay warm.”

There’s a weird sort of bonding that happens when the weather becomes the common enemy. Maybe we all become brothers and sisters in arms, struggling to survive against the elements. It got me to wondering: Is there a different sense of community in places where it’s really cold in the winter?

When I asked Google which countries had the strongest social ties, it gave me a list of nine: Finland, Norway, Canada, Denmark, Switzerland, Australia, Netherlands, Iceland and Italy. Seven of those places have snowy, cold winters. If you look at countries that have strong social democracies — governments established around the ideal of the common good — again, you’ll find that most of them are well north (or south, in the case of New Zealand) of the equator.

But let’s leave politics aside. Maybe it’s just the act of constantly transitioning from extreme cold to warm and cozy places where there’s a friendly face sincerely wishing you’ll “stay warm” that builds stronger social bonds. As I mentioned in a previous post, the Danes even have a name for it: hygge. It translates loosely to “coziness.”

There are definitely physical benefits to going from being really cold to being really warm. The Finns discovered this secret thousands of years ago when they created the sauna. The whole idea is to repeatedly go from a little hut where the temperature hovers around 80-90° C (176-194° F) and then jump through a hole you’ve cut in the ice into water barely above freezing. A paper from the Mayo Clinic lists the health benefits of saunas in a rather lengthy paragraph, touching on everything from reducing inflammation to clearer skin to fighting the flu.

But the benefits aren’t just physical. Estonia, which is just south of Finland, also has a strong sauna culture. A brilliant documentary by Anna Hints, “Smoke Sauna Sisterhood,” shows that the sauna can be a sacred space. As Estonia’s official submission to the Oscars, it’s in contention for a nomination.

Hints’ documentary shows that saunas can touch us on a deeply spiritual level, healing scars that can build up through our lives. There is something in the cycle of heat and cold that taps into inner truths. As Hints said in a recent interview, “With time, deeper, deeper layers of physical dirt start to come up to the surface, but also emotional dirt starts to come up to the surface.”

While I didn’t visit any saunas on my Edmonton trip, every time I ventured outside it was a hot-cold adventure. Everyone turns the thermostat up a little when it gets this cold, so you’re constantly going through doors where the temperature can swing 75 degrees Celsius (135 degrees Fahrenheit) in an instant. I don’t know if there’s a health benefit, but I can tell you it feels pretty damned good to get that warm welcome when you’re freezing your butt off.

Stay warm!

Harry, Meghan and the Curse of Celebrity

The new Netflix series on Harry and Meghan is not exactly playing out according to plan. A few weeks ago, MediaPost TV columnist Adam Buckman talked about the series, which promised an unprecedented, intimate view into the lives of the wayward Royal and his partner, its aim being “to give the rest of us a full-access pass into every nook and cranny of the lives and minds of Harry and Meghan.”

Since then, reviews have been mixed. While it is (according to Netflix) their most watched documentary ever, the world seems to be responding with a collective yawn. It is certainly not turning out to be the PR boost the two were hoping for, at least based on some viewer reviews on Rotten Tomatoes. Here is just one sample: “A massive whinge fest based on a string of lies, half-truths, and distortions of reality from two of the most privileged people on the planet.”

What I found interesting in this is the complex concept of celebrity, and how it continues to evolve – or more accurately, devolve – in our culture. This is particularly true when we mix our attitudes of modern celebrity with the hoary construct of royalty.

If it does anything, I think Harry and Meghan shows how the very concept of celebrity has turned toxic and has poisoned whatever nominal value you may find in sustaining a monarchy. And, if we are going to dissect the creeping disease of celebrity, we must go to the root of the problem, the media, because our current concept of celebrity didn’t really exist before modern mass media.

We have evolved to keep an eye on those at the top of the societal pyramid. It was a good survival tactic to do so. Our apex figureheads – whether they be heroes or gods – served as role models: a literal case of monkey see, monkey do. But it also ensured political survival. There is a bucketload of psychology tucked up in our brains reinforcing this human trait.

In many mythologies, the line between heroes and gods was pretty fuzzy. Also, interestingly, gods were always carnal creatures. The Greek and Roman mythical gods and heroes ostensibly acted as both role models and moral cautionary tales. With great power came great hedonistic appetites.

This gradually evolved into royalty. With kings and queens, there was a very deliberate physical and societal distance kept between royalty and the average subject. The messy bits of bad behavior that inevitably come with extreme privilege were kept well hidden. It pretty much played out that way for thousands of years.

There was a yin and yang duality to this type of celebrity that evolved over time. If we trace the roots of the word notorious, we see the beginnings of this duality and get some hints of when it began to unravel.

Notorious comes from the Latin notus – meaning “known.” Its current meaning – to be known for something negative – only started in the 17th century. It seems we could accept the duality of notoriety when it came to the original celebrities – our heroes and gods – but with the rise of Christianity and, later, Puritanism (which also hit its peak in the 17th century), we started a whitewash campaign on our own God’s image. This had a trickle-down effect in a more strait-laced society. We held our heroes, our God, as well as our kings and queens, to a higher standard. We didn’t want to think of them as carnal creatures.

Then, thanks to the media, things got a lot more complicated.

Up until the 19th century, there was really no such thing as a celebrity the way we know them today. Those that care about such things generally agree that French actress Sarah Bernhardt was the first modern celebrity. She became such because she knew how to manipulate media. She was the first to get her picture in the press. She was able to tour the world, with the telegraph spreading the word before her arrival. As the 19th century drew to a close, our modern concept of celebrity was being born.

It took a while for this fascination with celebrity to spill over to monarchies. In the case of the House of Windsor (a made-up name – the actual name of the family was Saxe-Coburg-Gotha, a decidedly Germanic name that became problematic when England was at war with Germany in World War I), this problem came to a head rather abruptly with King Edward VIII. He was the first royal who revelled in celebrity and who tried to use the media to his advantage. The worlds of celebrity and royalty collided with his abdication in 1936.

In watching Harry and Meghan, I couldn’t help but recall the many, many collisions between celebrity and the Crown since then. The monarchy has always tried to control its image through the media, and one can’t help feeling it has been hopelessly naïve in that attempt. Celebrity feeds on itself – it is the nature of the beast – and control is not an option.

Celebrity gives us a false sense of intimacy. We mistakenly believe we know the person who is famous, the same way we know those closest to us in our own social circle. We feel we have the right to judge them based on the distorted image of them that comes through the media. Somehow, we believe we know what motivates Harry and Meghan, what their ethics entail, what type of people they are.

I suppose one can’t fault Harry and Meghan for trying – yet again – to add their own narrative to the whirling pool of celebrity that surrounds them. But, if history is any indicator, it’s not really a surprise that it’s not going according to their plan.

The Ten Day Tech Detox

I should have gone cold turkey on tech. I really should have.

It would have been the perfect time – should have been the perfect time.

But I didn’t. As I spent 10 days on BC’s gorgeous Sunshine Coast with family, I also trundled along my assortment of connected gadgets.

But I will say it was a partially successful detox. I didn’t crack open the laptop as much as I usually do. I generally restricted use of my iPad to reading a book.

But my phone – it was my phone, always within reach, that tempted me with social media’s siren call.

In a podcast, Andrew Selepak, social media professor at the University of Florida, suggests that rather than doing a total detox that is probably doomed to fail, you use vacations as an opportunity to use tech as a tool rather than an addiction.

I will say that for most of the time, that’s what I did. As long as I was occupied with something I was fine. 

Boredom is the enemy. It’s boredom that catches you. And the sad thing was, I really shouldn’t have been bored. I was in one of the most beautiful places on earth. I had the company of people I loved. I saw humpback whales – up close – for Heaven’s sake. If ever there was a time to live in the moment, to embrace the here and now, this was it. 

The problem, I realized, is that we’re not really comfortable any more with empty spaces – whether they be in conversation, in our social life or in our schedule of activities. We feel guilt and anxiety when we’re not doing anything.

It was an interesting cycle. As I decompressed after many weeks of being very busy, the first few days were fine. “I need this,” I kept telling myself. It’s okay just to sit and read a book. It’s okay not to have every half-hour slot of the day meticulously planned to jam as much in as possible.

That lasted about 48 hours. Then I started feeling like I should be doing something. I was uncomfortable with the empty spaces.

The fact is, as I learned, boredom has always been part of the human experience. It’s a feature – not a bug. Boredom creates the empty spaces that can be filled with creativity. Alicia Walf, a neuroscientist and a senior lecturer in the Department of Cognitive Science at Rensselaer Polytechnic Institute, says it is critical for brain health to let yourself be bored from time to time:

“Being bored can help improve social connections. When we are not busy with other thoughts and activities, we focus inward as well as looking to reconnect with friends and family.

“Being bored can help foster creativity. The eureka moment when solving a complex problem when one stops thinking about it is called insight.

“Additionally, being bored can improve overall brain health. During exciting times, the brain releases a chemical called dopamine, which is associated with feeling good. When the brain has fallen into a predictable, monotonous pattern, many people feel bored, even depressed. This might be because we have lower levels of dopamine.”

That last bit, right there, is the clue to why our phones are particularly prone to being picked up in times of boredom. Actually, three things are at work here. The first is that our mobile devices let us carry an extended social network in our pockets. As an article from Harvard explains: “Thanks to the likes of Facebook, Snapchat, Instagram, and others, smartphones allow us to carry immense social environments in our pockets through every waking moment of our lives.”

As Walf said, boredom is our brain’s way of cueing us to seek social interaction. Traditionally, this meant getting the hell out of our cave – or cabin – or castle – and getting some face time with other humans.

But technology has short-circuited that. Now we get that social connection through the far less healthy substitute of a social media platform. And – in the most ironic twist – we get that social jolt not by interacting with the people we might happen to be with, but by each staring at a tiny little screen that we hold in our hand.

The second problem is that mobile devices are not designed to leave us alone, basking in our healthy boredom. They are constantly beeping, buzzing and vibrating to get our attention. 

The third problem is that – unlike a laptop or even a tablet – mobile devices are our device of choice when we are jonesing for a dopamine jolt. It’s our phones we reach for when we’re killing time in a lineup, riding the bus or waiting for someone in a coffee shop. This is why I had a hard time relegating my phone to being just a tool while I was away.

As a brief aside – even the term “killing time” shows how we are scared to death of being bored. That’s a North American saying – boredom is something to be hunted down and eradicated. You know what Italians call it? “Il dolce far niente” – the sweetness of doing nothing. Many are the people who try to experience life by taking endless photos and posting on various feeds, rather than just living it. 

The fact is, we need boredom. Boredom is good, but we are declaring war on it, replacing it with a destructive need to continually bathe our brains in the dopamine high that comes from checking our Facebook feed or latest TikTok reel.

At least one of the architects of this vicious cycle feels some remorse (also from the Harvard article): “I feel tremendous guilt,” admitted Chamath Palihapitiya, former Vice President of User Growth at Facebook, to an audience of Stanford students. He was responding to a question about his involvement in exploiting consumer behavior. “The short-term, dopamine-driven feedback loops that we have created are destroying how society works.”

That is why we have to put the phone down and watch the humpback whales. That, miei amici, is il dolce far niente!

With Digital Friends Like These, Who Needs Enemies?

Recently, I received an email from Amazon that began:

“You’re amazing. Really, you’re awesome! Did that make you smile? Good. Alexa is here to compliment you. Just say, ‘Alexa, compliment me’”

“What,” I said to myself, “sorry-assed state is my life in that I need to depend on a little black electronic hockey puck to affirm my self-worth as a human being?”

I realize that the tone of the email likely had tongue at least partway implanted in cheek, but still, seriously – WTF, Alexa? (Which, incidentally, Alexa also has covered. Pose that question and Alexa responds: “I’m always interested in feedback.”)

My next thought was, maybe I think this is a joke, but there are probably people out there that need this. Maybe their lives are dangling by a thread and it’s Alexa’s soothing voice digitally pumping their tires that keeps them hanging on until tomorrow. And – if that’s true – should I be the one to scoff at it?

I dug a little further into the question: can we depend on technology for friendship, for understanding – even for love?

The answer, it turns out, is probably yes.

A few studies have shown that we will share more with a virtual therapist than a human one in a face-to-face setting. We feel heard without feeling judged.

In another study, patients with a virtual nurse ended up creating a strong relationship with it that included:

  • Using close forms of greeting and goodbye
  • Expressing happiness to see the nurse
  • Using compliments
  • Engaging in social chat
  • And expressing a desire to work together and speak with the nurse again

Yet another study found that robots can even build a stronger relationship with us by giving us a pat on the hand or touching our shoulder. We are social animals and don’t do well when we lose that sociability. If we go too long without being touched, we experience something called “skin hunger” and start feeling stressed, depressed and anxious. The use of these robots is being tested in seniors’ care facilities to help combat extreme loneliness.

In reading through these studies, I was amazed at how quickly respondents seemed to bond with their digital allies. We have highly evolved mechanisms that determine when and with whom we place trust. In many cases, these judgements are based on non-verbal cues: body language, micro-expressions, even how people smell. It surprised me that when our digital friends presented none of these, the bonds still developed. In fact, it seems they were deeper and stronger than ever!

Perhaps it’s the very lack of humanness that is the explanation. As in the case of the success of a virtual therapist, maybe these relationships work because we can leave the baggage of being human behind. Virtual assistants are there to serve us, not judge or threaten us. We let our guards down and are more willing to open up.

Also, I suspect that the building blocks of these relationships are put in place not by the rational, thinking part of our brains but the emotional, feeling part. It’s been shown that self-affirmation works by activating the reward centers of our brain, the ventral striatum and ventromedial prefrontal cortex. These are not pragmatic, cautious parts of our cognitive machinery. As I’ve said before, they’re all gas and no brakes. We don’t think a friendship with a robot is weird because we don’t think about it at all, we just feel better. And that’s enough.

AI companionship seems a benign – even beneficial – use of technology, but what might the unintended consequences be? Are we opening ourselves up to potential dangers by depending on AI for our social contact, especially when the lines are blurred between for-profit motives and the affirmation we become dependent on?

In the therapeutic use cases of virtual relationships outlined up to now, there is no for-profit motive. But Amazon, Apple, Facebook, Google and the other providers of consumer-directed AI companionship are definitely in it for the money. Even more troubling, two of those – Facebook and Google – depend on advertising for their revenue. Much as this gang would love us to believe that they only have our best interests in mind, over $1.2 trillion in combined revenue says otherwise. I suspect they have put a carefully calculated price on digital friendship.

Perhaps it’s that – more than anything – that threw up the red flags when I got that email from Amazon. It sounded like it was coming from a friend, and that’s exactly what worries me.

Does Social Media “Dumb Down” the Wisdom of Crowds?

We assume that democracy is the gold standard of sustainable political social contracts. And it’s hard to argue against that. As Winston Churchill said, “democracy is the worst form of government – except for all the others that have been tried.”

Democracy may not be perfect, but it works. Or, at least, it seems to work better than all the other options. Essentially, democracy depends on probability – on being right more often than we’re wrong.

At the very heart of democracy is the principle of majority rule. And that is based on something called the Jury Theorem, put forward by the Marquis de Condorcet in his 1785 work, “Essay on the Application of Analysis to the Probability of Majority Decisions.” Essentially, it says that as long as each voter is more likely than not to make the right decision, the probability of the majority being right increases as you add more voters. This was the basis of James Surowiecki’s 2004 book, “The Wisdom of Crowds.”
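To see why the math favors bigger crowds, here’s a minimal simulation sketch in Python (the 55% accuracy figure is an invented assumption for illustration, not anything from Condorcet or Surowiecki): each voter independently gets the answer right with probability 0.55, and we estimate how often the majority lands on the right answer as the crowd grows.

```python
import random

def majority_correct_rate(n_voters, p_correct=0.55, trials=5_000):
    """Estimate how often a strict majority of independent voters is right."""
    wins = 0
    for _ in range(trials):
        # Count how many voters independently hit the right answer.
        correct = sum(random.random() < p_correct for _ in range(n_voters))
        if correct > n_voters / 2:
            wins += 1
    return wins / trials

# Each voter is barely better than a coin flip, yet the crowd converges on truth.
for n in (1, 11, 101, 1001):
    print(f"{n:>4} voters: majority right about {majority_correct_rate(n):.0%} of the time")
```

With each voter right just 55% of the time, a lone voter is wrong almost half the time, while a thousand independent voters hardly ever miss.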

But here’s the thing about the wisdom of crowds – it only applies when those individual decisions are reached independently. Once we start influencing each other’s decision, that wisdom disappears. And that makes social psychologist Solomon Asch’s famous conformity experiments of 1951 a disturbingly significant fly in the ointment of democracy.

You’re probably all aware of the seminal study, but I’ll recap anyway. Asch gathered groups of people and showed them a reference line on one card, along with a second card showing three lines of obviously different lengths. Then he asked participants which of the three lines was closest in length to the reference line. The answer was obvious – even a toddler can get this test right pretty much every time.

But unknown to the test subject, all the rest of the participants were “stooges” – actors paid to sometimes give an obviously incorrect answer. And when this happened, Asch was amazed to find that the test subjects often went against the evidence of their own eyes just to conform with the group. When wrong answers were given, a third of the subjects always conformed, 75% of the subjects conformed at least once, and only 25% stuck to the evidence in front of them and gave the right answer.

The results baffled Asch. The most interesting question to him was why this was happening. Were people making a decision to go against their better judgment – choosing to go with the crowd rather than what they were seeing with their own eyes? Or was something happening below the level of consciousness? This was something Solomon Asch wondered about right until his death in 1996. Unfortunately, he never had the means to explore the question further.

But, in 2005, a group of researchers at Emory University, led by Gregory Berns, did have a way. Here, Asch’s experiment was restaged, only this time participants were in an fMRI machine so Berns and his researchers could peek at what was actually happening in their brains. The results were staggering.

They found that conformity actually changes the way our brain works. It’s not that we change what we say to conform with what others are saying, despite what we see with our own eyes. What we see is changed by what others are saying.

If, Berns and his researchers reasoned, you were consciously making a decision to go against the evidence of your own eyes just to conform with the group, you should see activity in the frontal areas of the brain that are engaged in monitoring conflicts, planning and other higher-order mental activities.

But that isn’t what they found. In those participants who went along with obviously incorrect answers from the group, activity showed up only in the posterior parts of the brain – those that control spatial awareness and visual perception. There was no indication of an internal mental conflict. The brain was actually changing how it processed the information it was receiving from the eyes.

This is stunning. It means that conformity isn’t a conscious decision. Our desire to conform is wired so deeply in our brains, it actually changes how we perceive the world. We never have the chance to be objectively right, because we never realize we’re wrong.

But what about those who resisted conformity and stuck to the evidence they were seeing with their own eyes? Here again, the results were fascinating. The researchers found that in these cases, they saw a spike of activity in the right amygdala and right caudate nucleus – areas involved in the processing of strong emotions, including fear, anger and anxiety. Those that stuck to the evidence of their own eyes had to overcome emotional hurdles to do so. In the published paper, the authors called this the “pain of independence.”

This study highlights a massively important limitation in the social contract of democracy. As technology increasingly imposes social conformity on our culture, we lose the ability to collectively make the right decision. Essentially, it shows that this effect not only erases the wisdom of crowds, but actively works against it by exacting an emotional price for being an independent thinker.

The Physical Foundations of Friendship

It’s no secret that I worry about what the unintended consequences might be for us as we increasingly substitute a digital world for a physical one. What might happen to our society as we spend less time face-to-face with people and more time face-to-face with a screen?

Take friendship, for example. I have written before about how Facebook friends and real friends are not the same thing. A lot of this has to do with the mental work required to maintain a true friendship. This cognitive requirement led British anthropologist Robin Dunbar to come up with something called Dunbar’s Number – a rough rule-of-thumb that says we can’t really maintain a network of more than 150 friends, give or take a few.

Before you say, “I have way more friends on Facebook than that,” realize that I don’t care what your Facebook Friend count is. Mine numbers at least 3 times more than Dunbar’s 150 limit. But they are not all true friends. Many are just the result of me clicking a link on my laptop. It’s quick, it’s easy, and there is absolutely no requirement to put any skin in the game. Once clicked, I don’t have to do anything to maintain these friendships. They are just part of a digital tally that persists until I might click again, “unfriending” them. Missing is the ongoing physical friction that demands the maintenance required to keep a true friendship from slipping into entropy.

So I was wondering – what is that magical physical and mental alchemy that causes us to become friends with someone in the first place? When we share physical space with another human, what is the spark that causes us to want to get to know them better? Or – on the flip side – what are the red flags that cause us to head for the other end of the room to avoid talking to them? Fortunately, there is some science that has addressed those questions.

We become friends because of something sociologists call homophily – being like each other. In today’s world, that leads to some unfortunate social consequences, but in our evolutionary environment, it made sense. It has to do with kinship ties and what ethologist Richard Dawkins called The Selfish Gene. We want family to survive to pass on our genes. The best way to motivate us to protect others is to have an emotional bond with them. And it just so happens that family members tend to look somewhat alike. So we like – or love – others who are like us.

If we tie in the impact of geography over our history, we start to understand why this is so. Geography that restricted travel and led to inbreeding generally dictated a certain degree of genetic “sameness” in our tribe. It was a quick way to sort in-groups from out-groups. And in a bloodier, less politically correct world, this was a matter of survival.

But this geographic connection works both ways. Geographic restrictions lead to homophily, but repeated exposure to the same people also increases the odds that you’ll like them. In psychology, this is called the mere-exposure effect.

In these two ways, the limitations of a physical world have a deep, deep impact on the nature of friendship. But let’s focus on the first for a moment.

It appears we have built-in “friend detectors” that can actually sense genetic similarities. In a rather fascinating study, Nicholas Christakis and James Fowler found that friends are so alike genetically, they could actually be family. If you drill down to the individual building blocks of a gene at the nucleotide level, your friends are as alike genetically to you as your fourth cousin. As Christakis and Fowler say in their study, “friends may be a kind of ‘functional kin’.”

This shows how deeply friendship bonds are hardwired into us. Of course, this doesn’t happen equally across all genes. Evolution is nothing if not practical. For example, Christakis and Fowler found that specific systems do stay “heterophilic” (not alike) – such as our immune system. This makes sense. If you have a group of people who stay in close proximity to each other, the group is going to be more resistant to epidemics if there is some variety in what they’re individually immune to. If everyone had exactly the same immunity profile, the group would be highly resistant to some bugs and completely vulnerable to others. It would be putting all your disease-prevention eggs in one basket.

But in another example of extreme genetic practicality, how similar we smell to our friends is also genetically determined. Think about it. Would you rather be close to people who generally smell the same as you, or those who smell different? It seems a little silly in today’s world of private homes and extreme hygiene, but when you’re sharing very close living quarters with others and there’s no such thing as showers and baths, how everyone smells becomes extremely important.

Christakis and Fowler found that our olfactory sensibilities tend toward the homophilic side between friends. In other words, the people we like smell alike. And this is important because of something called olfactory fatigue. We use smell as a difference detector. It warns us when something is not right. And our nose starts to ignore smells it gets used to, even offensive ones. It’s why you can’t smell your own typical body odor. Or, in another even less elegant example, it’s why your farts don’t stink as much as others’.

Given all this, it would make sense that if you had to spend time close to others, you would pick people who smelled like you. Your nose would automatically be less sensitive to their smells. And that’s exactly what a new study from the Weizmann Institute of Science found. In the study, the scent signatures of complete strangers were sampled using an electronic sniffer called an eNose. Then the strangers were asked to engage in nonverbal social interactions in pairs. Afterward, they were asked to rate each interaction based on how likely they would be to become friends with the person. The result? Based on their smells alone, the researchers were able to predict with 71% accuracy who would become friends.

The foundations of friendship run deep – down to the genetic building blocks that make us who we are. These foundations were built in a physical world over millions of years. They engage senses that evolved to help us experience that physical world. Those foundations are not going to disappear in the next decade or two, no matter how addictive Facebook or TikTok becomes. We can continue to layer technology over these foundations, but to deny them is to ignore human nature.