The Cost of Not Being Curious

The world is having a pandemic-proportioned wave of Ostrichitis.

Now, maybe you haven’t heard of Ostrichitis. But I’m willing to bet you’re showing at least some of the symptoms:

  • Avoiding newscasts, especially those that feature objective and unbiased reporting
  • Quickly scrolling past any online news items in your feed that look like they may be uncomfortable to read
  • Dismissing out of hand information coming from unfamiliar sources

These are the signs of Ostrichitis – or the Ostrich Effect – and I have all of them. This is actually a psychological effect, more pointedly called willful ignorance, which I wrote about a few years ago. And from where I’m observing the world, we all seem to have it to one extent or another.

I don’t think this avoidance of information comes as a shock to anyone. The world is a crappy place right now. And we all seem to have gained comfort from adopting the folk wisdom that “no news is good news.” Processing bad news is hard work, and we just don’t have the cognitive resources to crunch through endless cycles of catastrophic news. If the bad news affirms our existing beliefs, it makes us even madder than we already were. If it runs counter to our beliefs, it forces us to spin up our sensemaking mechanisms and reframe our view of reality. Either way, there are way more fun things to do.

A recent study from the University of Chicago attempted to pinpoint when children start avoiding bad news. The research team found that while young children don’t tend to put boundaries around their curiosity, as they age they start avoiding information that challenges their beliefs or threatens their own well-being. The threshold seems to be about 6 years old. Before that, children are actively seeking information of all kinds (as any parent barraged by never-ending “Whys” can tell you). After that, children start strategizing the types of information they pay attention to.

Now, like everything about humans, curiosity tends to be an individual thing. Some of us are highly curious and some of us religiously avoid seeking new information. But even if we are a curious sort, we may pick and choose what we’re curious about. We may find “safe zones” where we let our curiosity out to play. If things look too menacing, we may protect ourselves by curbing our curiosity.

The unfortunate part of this is that curiosity, in all its forms, is almost always a good thing for humans (even if it can prove fatal to cats).

The more curious we are, the better tied we are to reality. The lens we use to parse the world is something called a sense-making loop. I’ve often referred to this in the past. It’s a processing loop that compares what we experience with what we believe, known as our “frame.” For the curious, this frame is regularly updated to match what we experience. For the incurious, the frame is clung to stubbornly, often by ignoring new information or bending information to conform to their beliefs. A curious brain is a brain primed to grow and adapt. An incurious brain is one that is stagnant and inflexible. That’s why the father of American psychology, William James, called curiosity “the impulse towards better cognition.”
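If you like to see a mechanism spelled out, here’s a minimal sketch of that loop in Python. Everything specific in it – the linear update rule, the curiosity parameter – is my own toy assumption, not a model drawn from the research; it simply shows how a curious frame converges on experience while an incurious one never moves.

```python
# A toy sketch of the sense-making loop: each step compares a new experience
# against the current "frame" (belief) and nudges the frame toward reality,
# scaled by a curiosity factor between 0 and 1. The linear update rule is an
# illustrative assumption, not a claim about how brains actually compute.

def sense_making_step(frame: float, experience: float, curiosity: float) -> float:
    surprise = experience - frame        # mismatch between belief and reality
    return frame + curiosity * surprise  # curious brains close the gap

frame = 0.0
for experience in [1.0, 1.0, 1.0]:
    frame = sense_making_step(frame, experience, curiosity=0.5)
    print(round(frame, 3))  # 0.5, 0.75, 0.875 -- the frame adapts

# With curiosity=0.0, the frame never moves: stagnant and inflexible.
```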

When we think about the world we want, curiosity is a key factor in defining it. Curiosity keeps us moving forward. The lack of curiosity locks us in place or even pushes us backwards, causing the world to regress to a more savage and brutal place. Writers of dystopian fiction knew this. That’s why authors including H.G. Wells, Aldous Huxley, Ray Bradbury and George Orwell all made a lack of curiosity a key part of their bleak future worlds. Our current lack of curiosity is driving our world in the same dangerous direction.

For all these reasons, it’s essential that we stay curious, even if it’s becoming increasingly uncomfortable.

Being in the Room Where It Happens

I spent the past weekend attending a conference that I had helped to plan. As is now often the case, this was a hybrid conference; you could choose to attend in person or online via Zoom. Although it involved a long plane ride, I chose to attend in person. It could be because – as a planner – I wanted to see how the event played out. Also, it’s been a long time since I attended a conference away from my home. Or – maybe – it was just FOMO.

Whatever the reason, I’m glad I was there, in the room.

This was a very small conference planned on a shoestring budget. We didn’t have money for extensive IT support or AV equipment. We were dependent solely on a laptop and whatever sound equipment our host was able to supply. We knew going into the conference that this would make for a less-than-ideal experience for those attending virtually. But – even accounting for that – I found there was a huge gap in the quality of the experience between those who were there in person and those who attended online. And, over the duration of the 3-day conference, I observed why that might be so.

This conference was a 50/50 mix of those who already knew each other and those who were meeting each other for the first time. Even those who were familiar with each other tended to connect more often via a virtual meeting platform than in a physical meeting space. I know that despite the convenience and efficiency of being able to meet online, something is lost in the process. After those three days of carefully observing what was happening in the room we were all in, I have a better understanding of what that loss might be – it was the vague and inexact art of creating a real bond with another person.

In that room, the bonding didn’t happen at the speaking podium and very seldom happened during the sessions we so carefully planned. It seeped in on the sidelines, over warmed-over coffee from conference centre urns, overripe bananas and the detritus of the picked-over pastry tray. The bonding came from all of us sharing and digesting a common experience. You could feel a palpable energy in the room. You could pick up the emotion, read the body language and tune in to the full bandwidth of communication that goes far beyond what could be transmitted between an onboard microphone and a webcam.

But it wasn’t just the sharing of the experience that created the bonds. It was the digesting of those experiences after the fact. We humans are herding animals, and that extends to how we come to consensus about things we go through together. We do so through communication with others – not just with words and gestures, but also through the full bandwidth of our evolved mechanisms for coming to a collective understanding. It wasn’t just that a camera and microphone couldn’t transmit that effectively; it was that it happened where there was no camera or mic.

As researchers have discovered, there is a lived reality and a remembered reality, and often they don’t look very much alike. The difference between the effectiveness of an in-person experience and one accessed through an online platform shouldn’t come as a surprise to us. This is due to how our evolved sense-making mechanisms operate. We make sense of reality both internally, through a comparison with our existing cognitive models, and externally, through interacting with others around us who have shared that same reality. This communal give-and-take colors what we take with us, in the form of both memories and an updated model of what we know and believe. When it comes to how humans are built, collective sense-making is a feature, not a bug.

I came away from that conference with much more than the content that was shared at the speaker dais. I also came away with a handful of new relationships, built on sharing an experience and, through that, laying down the first foundations of trust and familiarity. I would not hesitate to reach out to any of these new friends if I had a question about something or a project I felt they could collaborate on.

I think that’s true largely because I was in the room where it happened.

When Did the Future Become So Scary?

The TWA Hotel at JFK airport in New York gives one an acute case of temporal dissonance. It’s a step backwards in time to the “Golden Age of Travel” – the 1960s. But even though you’re transported back 60 years, it seems like you’re looking into the future. The original space – the TWA Flight Center – was designed by Eero Saarinen and opened in 1962. This was a time when America was in love with the idea of the future. Science and technology were going to be our saving grace. The future was going to be a utopian place filled with flying jet cars, benign robots and gleaming, sexy white curves everywhere. The TWA Flight Center was dedicated to that future.

It was part of our love affair with science and technology during the 60s. Corporate America was falling over itself to bring the space-age-fueled future to life as soon as possible. Disney first envisioned the community of tomorrow that would become Epcot. Global Expos had pavilions dedicated to what the future would bring. There were four World’s Fairs over 12 years, from 1958 to 1970, each celebrating a bright, shiny white future. There wouldn’t be another for 22 years.

This fascination with the future was mirrored in our entertainment. Star Trek (pilot in 1964, series start in 1966) invited all of us to boldly go where no man had gone before, namely a future set roughly three centuries from then. For those of us of a younger age, The Jetsons (original series from 1962 to 63) indoctrinated an entire generation into this religion of future worship. Yes, tomorrow would be wonderful – just you wait and see!

That was then – this is now. And now is a helluva lot different.

Almost no one – especially in the entertainment industry – is envisioning the future as anything other than an apocalyptic hellhole. We’ve done an about-face and are grasping desperately for the past. The future went from being utopian to dystopian, seemingly in the blink of an eye. What happened?

It’s hard to nail down exactly when we went from eagerly awaiting the future to dreading it, but it appears to be sometime during the last two decades of the 20th Century. By the time the clock ticked over to the next millennium, our love affair was over. As Chuck Palahniuk, author of the 1999 novel Invisible Monsters, quipped, “When did the future go from being a promise to a threat?”

Our dread about the future might just be a fear of change. As the future we imagined in the 1960s started playing out in real time, perhaps we realized our vision was a little too simplistic. The future came with unintended consequences, including massive societal shifts. It’s like we collectively told ourselves, “Once burned, twice shy.” Maybe it was the uncertainty of the future that scared the bejeezus out of us.

But it could also be how we got our information about the impact of science and technology on our lives. I don’t think it’s a coincidence that our fear of the future coincided with the decline of journalism. Sensationalism and endless punditry replaced real reporting just about the time we started this about-face. When negative things happened, they were amplified. Fear was the natural result. We felt out of control, and we kept telling ourselves that things never used to be this way.

The sum total of all this was the spread of a recognized psychological affliction called Anticipatory Anxiety – the certainty that the future is going to bring bad things down upon us. This went from being a localized phenomenon (“my job interview tomorrow is not going to go well”) to a widespread angst (“the world is going to hell in a handbasket”). Call it Existential Anticipatory Anxiety.

Futurists are – by nature – optimists. They believe things will be better tomorrow than they are today. In the Sixties, we all leaned into the future. The opposite of this is something called Rosy Retrospection, and it often comes bundled with Anticipatory Anxiety. It is a known cognitive bias that comes with a selective memory of the past, tossing out the bad and keeping only the good parts of yesterday. It makes us yearn to return to the past, when everything was better.

That’s where we are today. It explains the worldwide swing to the right. MAGA is really a 4-letter encapsulation of Rosy Retrospection – Make America Great Again! Whether you believe that or not, it’s a message that is very much in sync with our current feelings about the future and the past.

As writer and right-leaning political commentator William F. Buckley said, “A conservative is someone who stands athwart history, yelling Stop!”

It’s Tough to Consume Conscientiously

It’s getting harder to be both a good person and a wise consumer.

My parents never had this problem when I was a kid. My dad was a Ford man. Although he hasn’t driven for 10 years, he still is. If you grew up in the country, your choices were simple – you needed a pickup truck. And in the 1960s and 70s, there were only three choices: Ford, GMC or Dodge. For dad, the choice was Ford – always.

Back then, brand relationships were pretty simple. We benefited from the bliss of ignorance. Did the Ford Motor Company do horrible things during that time? Absolutely. As just one example, they made a cost-benefit calculation and decided to keep the Pinto on the road even though they knew it tended to blow up when hit from the rear. There is a corporate memo saying – in black and white – that it would be cheaper to settle the legal claims of those who died than to fix the problem. The company was charged with reckless homicide. It doesn’t get less ethical than that.

But that didn’t matter to Dad. He either didn’t know or didn’t care. The Pinto Problem, along with the rest of the shady stuff done by the Ford Motor Company, including bribes, kickbacks and improper use of corporate funds by Henry Ford II, was not part of Dad’s consumer decision process. He still bought Ford. And he still considered himself a good person. The two things had little to do with each other.

Things are harder now for consumers. We definitely have more choice, and those choices are harder, because we know more.  Even buying eggs becomes an ethical struggle. Do we save a few bucks, or do we make some chicken’s life a little less horrible?

Let me give you the latest example from my life. Next year, we are planning to take our grandchildren to a Disney theme park. If our family has a beloved brand, it would be Disney. The company has been part of my kids’ lives in one form or another since they were born, and we all want it to be part of their kids’ lives as well.

Without getting into the whole debate, I personally have some moral conflicts with some of Disney’s recent corporate decisions. I’m not alone. A Facebook group for those planning a visit to this particular park has recently seen posts from those agonizing over the same issue. Does taking the family to the park make us complicit in Disney’s actions that we may not agree with? Do we care enough to pull the plug on a long-planned park visit?

This gets to the crux of the issue facing consumers now – how do we balance our beliefs about what is wrong and right with our desire to consume? Which do we care more about? The answer, as it turns out, seems to almost always be to click the buy button as we hold our noses.

One way to make that easier is to tell ourselves that one less visit to a Disney park will make virtually no impact on the corporate bottom line. Depriving ourselves of a long-planned family experience will make no difference. And – individually – this is true. But it’s exactly this type of consumer apathy which, when aggregated, allows corporations to get away with being bad moral characters.

Even if we want to be more ethically deliberate in our consumer decisions, it’s hard to know where to draw the line. Where are we getting our information about corporate behavior from? Can it be trusted? Is this a case of one regrettable action, or is there a pattern of unethical conduct? These decisions are always complex, and coming to any decision that involves complexity is always tricky.

To go back to a simpler time, my grandmother had a saying that she applied liberally to any given situation, “What does all this have to do with the price of tea in China?” Maybe she knew what was coming.

There Are No Short Cuts to Being Human

The Velvet Sundown fooled a lot of people, including millions of fans on Spotify and the writers and editors at Rolling Stone. It was a band that suddenly showed up on Spotify several months ago, with full albums of vintage Americana-styled rock. Millions started streaming the band’s songs – except there was no band. The songs, the album art, the band’s photos – it was all generated by AI.

When you know this and relisten to the songs, you swear you would never have been fooled. Those who are now in the know say the music is formulaic, derivative and uninspired. Yet we were fooled – or, at least, millions of us were – taken in by an AI hoax, or what is now euphemistically labelled on Spotify as “a synthetic music project guided by human creative direction and composed, voiced and visualized with the support of artificial intelligence.”

Formulaic. Derivative. Synthetic. We mean these as criticisms. But they are accurate descriptions of exactly how AI works. It is synthesis by formulas (or algorithms) that parse billions or trillions of data points, identify patterns and derive the finished product from it. That is AI’s greatest strength…and its biggest downfall.

The human brain, on the other hand, works quite differently. Our biggest constraint is the limit of our working memory. When we analyze disparate data points, the available slots in our temporary memory bank can number in the low single digits. To cognitively function beyond this limit, we have to do two things: “chunk” them together into mental building blocks and code them with emotional tags. That is the human brain’s greatest strength… and, again, its biggest downfall. What the human brain is best at is what AI is unable to do. And vice versa.
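As a crude illustration of what “chunking” buys us, consider this short sketch. The four-slot limit and the phone-number grouping are my own illustrative assumptions, not figures from the literature.

```python
# Toy illustration of chunking: ten raw digits overflow a small working
# memory, but grouped into three familiar blocks they fit comfortably.
# The 4-slot capacity is an assumed round number for illustration only.

WORKING_MEMORY_SLOTS = 4

raw_digits = list("5551234567")   # 10 separate items to hold at once
chunks = ["555", "123", "4567"]   # the same digits as 3 meaningful blocks

print(len(raw_digits) <= WORKING_MEMORY_SLOTS)  # False: raw data overflows
print(len(chunks) <= WORKING_MEMORY_SLOTS)      # True: chunks fit the limit
```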

A few posts back, when talking about one less-than-impressive experience with an AI tool, I ended by musing about what role humans might play as AI evolves and becomes more capable. One possible answer is something labelled “HITL,” or “Humans in the Loop.” It plugs the “humanness” that sits in our brains into the equation, allowing AI to do what it’s best at and humans to provide the spark of intuition or the “gut checks” that currently cannot come from an algorithm.

As an example, let me return to the subject of that previous post, building a website. There is a lot that AI could do to build out a website. What it can’t do very well is anticipate how a human might interact with the website. These “use cases” should come from a human, perhaps one like me.

Let me tell you why I believe I’m qualified for the job. For many years, I studied online user behavior quite obsessively and published several white papers that are still cited in the academic world. I was a researcher for hire, with contracts with all the major online players. I say this not to pump my own ego (okay, maybe a little bit – I am human after all) but to set up the process of how I acquired this particular brand of expertise.

It was accumulated over time, as I learned to analyze online interactions, code eye-tracking sessions and talk to users about goals and intentions. All the while, I was continually plugging new data into my few available working memory slots and “chunking” them into the building blocks of my expertise, to the point where I could quickly look at a website or search results page and provide a pretty accurate “gut call” prediction of how a user would interact with it. This is – without exception – how humans become experts at anything. Malcolm Gladwell called it the “10,000-hour rule.” For humans to add any value “in the loop,” they must put in the time. There are no short cuts.

Or – at least – there never used to be. There is now, and that brings up a problem.

Humans now do something called “cognitive off-loading.” If something looks like it’s going to be a drudge to do, we now get ChatGPT to do it. This is the slogging mental work that our brains are not particularly well suited to. That’s probably why we hate doing it – the brain is trying to shirk the work by tagging it with a negative emotion (brains are sneaky that way). Why not get AI, which can instantly sort through billions of data points and synthesize them into a one-page summary, to do our dirty work for us?

But by off-loading, we short circuit the very process required to build that uniquely human expertise. Writer, researcher and educational change advocate Eva Keiffenheim outlines the potential danger for humans who “off-load” to a digital brain; we may lose the sole advantage we can offer in an artificially intelligent world, “If you can’t recall it without a device, you haven’t truly learned it. You’ve rented the information. We get stuck at ‘knowing about’ a topic, never reaching the automaticity of ‘knowing how.’”

For generations, we’ve treasured the concept of “know how.” Perhaps, in all that time, we forgot how much hard mental work was required to gain it. That could be why we are quick to trade it away now that we can.

The Credibility Crisis

We in the western world are getting used to playing fast and loose with the truth. There is so much that is false around us – in our politics, in our media, in our day-to-day conversations – that it’s just too exhausting to hold everything to a standard of truth. Even the skeptical amongst us no longer have the cognitive bandwidth to keep searching for credible proof.

This is by design. Somewhere in the past four decades, politicians and society’s power brokers discovered that by pandering to beliefs rather than trading in facts, you can bend the truth to your will. Those who seek power and influence have struck paydirt in falsehoods.

In a cover story last summer in the Atlantic, journalist Anne Applebaum explains the method in the madness: “This tactic—the so-called fire hose of falsehoods—ultimately produces not outrage but nihilism. Given so many explanations, how can you know what actually happened? What if you just can’t know? If you don’t know what happened, you’re not likely to join a great movement for democracy, or to listen when anyone speaks about positive political change. Instead, you are not going to participate in any politics at all.”

As Applebaum points out, we have become a society of nihilists. We are too tired to look for evidence of meaning. There is simply too much garbage to shovel through to find it. We are pummeled by wave after wave of misinformation, struggling to keep our heads above the rising waters by clinging to the life preserver of our own beliefs. In the process, we run the risk of those beliefs becoming further and further disconnected from reality, whatever that might be. The cogs of our sensemaking machinery have become clogged with crap.

This reverses a consistent societal trend towards the truth that has been happening for the past several centuries. Since the Enlightenment of the 18th century, we have held reason and science as the compass points of our True North. These twin ideals were buttressed by our institutions, including our media outlets. Their goal was to spread knowledge. It is no coincidence that journalism flourished during the Enlightenment. Freedom of the press was constitutionally enshrined to ensure the media had both the right and the obligation to speak the truth.

That was then. This is now. In the U.S., institutions including media, universities and even museums are being overtly threatened if they don’t participate in the willful obfuscation of objectivity that is coming from the White House. NPR and PBS, two of the most reliable news sources according to the Ad Fontes media bias chart, have been defunded by the federal government. Social media feeds are awash with AI slop. In a sea of misinformation, the truth becomes impossible to find. And – for our own sanity – we have had to learn to stop caring about that.

But here’s the thing about the truth. It gives us an unarguable common ground. It is consistent and independent from individual belief and perspective. As longtime senator Daniel Patrick Moynihan famously said, “Everyone is entitled to his own opinion, but not to his own facts.” 

When you trade in falsehoods, the ground is constantly shifting below your feet. The story keeps changing to match the current situation and the desired outcome. There are no bearings to navigate by. Everyone has their own compass, and they’re all pointing in different directions.

The path the world is currently going down is troubling in a number of ways, but perhaps the most troubling is that it simply isn’t sustainable. Sooner or later in this sea of deliberate chaos, credibility is going to be required to convince enough people to do something they may not want to do. And if you have consistently traded away your credibility by battling the truth, good luck getting anyone to believe you.

The Double-Edged Sword of a “Doer” Society

Ask anyone who comes to the United States from somewhere else what attracted them. The most common answer is “because anything is possible here.” The U.S. is a nation of “doers.” It has been that promise that has attracted wave after wave of immigration, made up of those chafing at the restraints and restrictions of their homelands. The concept of getting things done was embodied in a line Robert F. Kennedy famously borrowed from George Bernard Shaw: “Some men see things as they are and ask why? I dream of things that never were and ask why not?” The U.S. – more than anywhere else in the world – is the place to make those dreams come true.

But that comes with some baggage. Doers are individualists by definition. They are driven by what they can accomplish, by making something from nothing. And with that comes an obsessive focus on time. When we have so much that we can do, we constantly worry about losing time. Time becomes one of the few constraints in a highly individualistic society.

But the U.S. is not just individualistic. There are other countries that score highly on individualistic traits, including Australia, the U.K., New Zealand and my own home, Canada. But the U.S. is different, in that it’s also vertically individualistic – it is a highly hierarchical society obsessed with personal achievement. And – in the U.S. – achievement is measured in dollars and cents. In a Freakonomics podcast episode, Gert Jan Hofstede, a professor of artificial sociality in the Netherlands, called out this difference: “When you look at cultures like New Zealand or Australia that are more horizontal in their individualism, if you try to stand out there, they call it the tall poppy syndrome. You’re going to be shut down.”

In the U.S., tall poppies are celebrated and given god-like status. The ultra-rich are recognized as the ideal to be aspired to. And this creates a problem in a nation of doers. If wealth is the ultimate goal, anything that stands between us and that goal is an obstacle to be eliminated.

When Breaking the Rules Becomes the Rule

“Move fast and break things” – Mark Zuckerberg

In most societies, equality and fairness are the guardrails of governance. It was the U.S. that enshrined these in its constitution. Making sure things are fair and equal requires the establishment of rules of law and the setting of social norms. But in the U.S., the breaking of rules is celebrated if it’s required to get things done. From the same Freakonomics podcast, Michele Gelfand, a professor of organizational behavior at Stanford, said, “In societies that are tighter, people are willing to call out rule violators. Here in the U.S., it’s actually a rule violation to call out people who are violating norms.”

There is an inherent understanding in the U.S. that sometimes trade-offs are necessary to achieve great things. It’s perhaps telling that Meta CEO Mark Zuckerberg is fascinated by the Roman emperor Augustus, a figure history generally recognizes as having achieved what he did at significant societal cost, including the subjugation of conquered territories and a brutal and systematic elimination of any opponents. This is fully recognized and embraced by Zuckerberg, who has said of his historic hero, “Basically, through a really harsh approach, he established 200 years of world peace. What are the trade-offs in that? On the one hand, world peace is a long-term goal that people talk about today … (but) … that didn’t come for free, and he had to do certain things.”

Slipping from Entrepreneurialism to Entitlement

A reverence for “doing” can develop a toxic side when it becomes embedded in a society. In many cases, entrepreneurialism and entitlement are two sides of the same coin. In a culture where entrepreneurial success is celebrated and iconized by media, the focus of entrepreneurialism can often shift from trying to profitably solve a problem to simply profiting. Chasing wealth becomes the singular focus of “doing.” In a society that has always encouraged everyone to chase their dreams, no matter the cost, this can create an environment where the Tragedy of the Commons is repeated over and over again.

This creates a paradox – a society that celebrates extreme wealth without seeming to realize that the more that wealth is concentrated in the hands of the few, the less there is for everyone else. Simple math is not the language of dreams.

To return to Augustus for a moment, we should remember that he was the one responsible for dismantling an admittedly barely functioning republic and installing himself as the autocratic emperor by doing away with democracy, consolidating power in his own hands and gutting Rome’s constitution.

Face Time in the Real World is Important

For all the advances made in neuroscience, we still don’t fully understand how our brains respond to other people. What we do know is that it’s complex.

Join the Chorus

Recent studies, including this one from the University of Rochester, are showing that when we see someone we recognize, the brain responds with a chorus of neuronal activity. Neurons from different parts of the brain fire in unison, creating a congruent response that may simultaneously pull from memory, from emotion, from the rational regions of our prefrontal cortex and from other deep-seated areas of our brain. The firing of any one neuron may be relatively subtle, but together this chorus of neurons can create a powerful response to a person. This cognitive choir represents our total comprehension of an individual.

Non-Verbal Communication

“You’ll have your looks, your pretty face. – And don’t underestimate the importance of body language!” – Ursula, The Little Mermaid

Given that we respond to people with different parts of the brain, it makes sense that we use parts of the brain we aren’t even aware of when communicating with someone else. In 1967, psychologist Albert Mehrabian attempted to pin this down with some actual numbers, publishing a paper in which he put forth what became known as Mehrabian’s Rule: 7% of communication is verbal, 38% is tone of voice and 55% is body language.

Like many oft-quoted rules, this one is typically misquoted. It’s not that words are unimportant when we communicate something. Words convey the message. But it’s the non-verbal part that determines how we interpret the message – and whether we trust it or not.

Folk wisdom has told us, “Your mouth is telling me one thing, but your eyes are telling me another.” In this case, folk wisdom is right. We evolved to respond to another person with our whole bodies, with our brains playing the part of conductor. Maybe the numbers don’t exactly add up to Mehrabian’s neat and tidy ratio, but the importance of non-verbal communication is undeniable. We intuitively pick up incredibly subtle hints: a slight tremor in the voice, a bead of sweat on the forehead, a slight turn down of one corner of the mouth, perhaps a foot tapping or a finger trembling, a split-second darting of the eye. All this is subconsciously monitored, fed to the brain and orchestrated into a judgment about a person and what they’re trying to tell us. This is how we evolved to judge whether we should build trust or lose it.

Face to Face vs Face to Screen

Now, we get to the question you knew was coming, “What happens when we have to make these decisions about someone else through a screen rather than face to face?”

Given that we don’t fully understand how the brain responds to people yet, it’s hard to say how much of our ability to judge whether we should extend trust or withhold it is impaired by screen-to-screen communication. My guess is that the impairment is significant, probably well over 50%. It’s difficult to test this in a laboratory setting, given that it generally requires some type of neuroimaging, such as an fMRI scanner. In order to present a stimulus for the brain to respond to while the subject is strapped in, a screen is really the only option. But common sense tells me – given the sophisticated and orchestrated nature of our brain’s social responses – that a lot is lost in translation from a real-world encounter to a screen.

New Faces vs Old Ones

If we think of how our brains respond to faces, we realize that in today’s world, a lot of our social judgements are increasingly made without face-to-face encounters. In a case where we know someone, we will pull forward a snapshot of our entire history with that person. The current communication is just another data point in a rich collection of interpersonal experience. One would think that would substantially increase our odds of making a valid judgement.

But what if we must make a judgement on someone we’ve never met before, and have only seen through a screen; be it a TikTok post, an Instagram Reel, a YouTube video or a Facebook Post? What if we have to decide whether to believe an influencer when making an important life decision? Are we willing to rely on a fraction of our brain’s capacity when deciding whether to place trust in someone we’ve never met?

Keep Those Cousins Close!

Demographic trends tend to play out on the timelines of multiple generations. Declining birth rates, increased life spans and widespread lifestyle changes can all have a dramatic impact on not only what our families look like, but also how we connect with them. And because families are the nucleus of our world, changes in families mean fundamental changes in us: who we are, what we believe and how we connect with our world.

I have previously written about one such trend – a surplus of grandparents. The ratio of grandparents to grandchildren has never been higher than it is right now, thanks to increased life expectancy and a declining birth rate. It’s closing in on 1:1, meaning for every child, there is one unique grandparent. As a grandparent, I have to believe this is a good thing.

But another demographic trend is playing out and this may not be as positive for our family structure. While the grandparent market is booming, our supply of cousins is dwindling. And – as I’ll explain shortly – cousins are a good thing for us to have.

But first, a little demographic math. In the U.S. in 1960, the average number of children per household was 3.62. This was a spike thanks to the post-WWII Baby Boom, but it’s relevant because this generation and the one before were the ones that determined the current crop of cousins for people of my age.

My parents were born in the 1930s. If both of them had 3 siblings, as was the norm, that would give me 6 aunts or uncles, all having children during the Baby Boom. And each of them would have 3 to 4 kids. So that could supply as many as 24 first cousins for me.

Now, let’s skip ahead a generation. Since 1970, the average number of children per household in the U.S. has hovered between 1.5 and 2. If I had been born in 1995, that would mean I only had 2 aunts or uncles, one from my mother’s side and one from my father’s. And if they each had 2 children, that would drop my first-cousin quota down to 4. That’s 20 fewer first cousins in just one generation!
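For anyone who wants to check that arithmetic, here’s the back-of-envelope model in a few lines of Python. The formula (first cousins = aunts and uncles × children per household) is just the simple multiplication used above, not demographic science.

```python
# Back-of-envelope cousin math for the two scenarios described above.

def first_cousins(siblings_per_parent: int, kids_per_household: int) -> int:
    aunts_and_uncles = 2 * siblings_per_parent  # both parents' siblings
    return aunts_and_uncles * kids_per_household

print(first_cousins(3, 4))  # Baby Boom scenario: 6 x 4 = 24 first cousins
print(first_cousins(1, 2))  # post-1970 scenario: 2 x 2 = 4 first cousins
```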

But what does this lack of first cousins mean in real terms? Cousins play an interesting sociological and psychological role in our development. Thanks to evolution, we all have something called “kinship altruism.” In the simplest of terms, we are hardwired to help those with whom we share some DNA. Those evolved bonds are strongest with those with whom we share the most DNA. There is a hierarchy of kinship – topped by our parents and siblings.

But just one rung down the ladder are our first cousins. And those first cousins can play a critical role in how we get along with the world as we grow up. As journalist Faith Hill said, writing about this in The Atlantic, “Cousin connections can be lovely because they exist in that strange gray area between closeness and distance—because they don’t follow a strict playbook.”

As Hill said, cousins represent a unique middle ground. We have a lot in common with our cousins, but not too much. Our cousins can come from different upbringings, can span a wider range of ages than our siblings, can come from different socio-economic circumstances, can even live in different places. We may see them every day, or once every year or two. Yet, we are connected in an important way. Cousins play a critical role in helping us navigate relationships and learn to understand different perspectives. Having a lot of cousins is like having a big sandbox for our societal development.

If you overlay societal trends on this demographic trend towards fewer first cousins, the shift is even more noticeable. We are a lot more mobile now than our parents and grandparents were. Families used to generally live close to each other. Now they’re spread across the country. My wife, who is Italian, has almost 50 first cousins, and almost all of them live in the same town. But that is rare. Most of us have a handful of cousins whom we rarely see. We don’t have the advantage of growing up together. At a time when societal connection is more important than ever, I worry that this is one more instance of us losing the skills we need to get along with each other.

From my own experience, I have found that my relationships with my cousins are vital in negotiating the stewardship of our families as it’s handed off from our parents’ generation to our own. I personally have become closer to many cousins as – one by one – our parents are taken from us. Through our cousins, we relive cherished memories and regain that common ground of shared experience and ancestry.

Paging Dr. Robot

When it comes to the benefits of A.I., one of the most intriguing opportunities is in healthcare. Microsoft recently announced that, in a diagnostic challenge pitting its Microsoft AI Diagnostic Orchestrator (MAI-DxO) head to head against 21 practicing physicians, the A.I. system correctly diagnosed 85% of 300 challenging cases gathered from the New England Journal of Medicine. The human doctors only managed to get 20% of the diagnoses correct.

This is of particular interest to me, because Canada has a health care problem. In a recent comparison of international health policies conducted by the Commonwealth Fund, Canada came in last amongst 9 countries, most of which also have universal health care, on most key measures of timely access.

This is a big problem, but it’s not an unsolvable one. This does not qualify as a “wicked” problem, which I’ve talked about before. Wicked problems have no clear solution. I believe our healthcare problems can be solved, and A.I. could play a huge role in the solution.

The Canadian Medical Association has outlined both the problems facing our healthcare system and some potential solutions. The overarching narrative is one of a system stretched beyond its resources and patients unable to access care in a timely manner. Human resources are burnt out and demotivated. Our back-end health record systems are siloed and inconsistent. An aging population, health misinformation, political beliefs and climate change are creating more demand for health services just as the supply of those services is being depleted.

Here’s one personal example of the gaps in our own health records. I recently had to go to my family doctor for a physical that is required to maintain my commercial driver’s license. I was handed off to a student doctor, given that it was a very routine check-up. Because I was seeing the doctor anyway, I thought it a good time to ask for a regular blood panel test, because it had been a while since I had had one. Being a male of a certain age, I also asked for a Prostate-Specific Antigen (PSA) test and was told that it isn’t recommended as a screening test in my province anymore.

I was taken aback. I had been diagnosed with prostate cancer a decade earlier and had been successfully treated for it. It was a PSA test that led to an early diagnosis. I mentioned this to the doctor, who was sitting behind a computer screen with my records in front of him. He looked back at the screen and said, “Oh, you had prostate cancer? I didn’t know that. Sure, I’ll add a PSA to the requisition.”

I wish I could say that’s an isolated incident, but it’s not. These gaps in our medical records happen all the time here in my part of Canada. And they can all be solved. Aggregating and analyzing data at a scale beyond what humans can handle is exactly what A.I. excels at. Yet our healthcare system continues to overwork exhausted healthcare providers and keep our personal health data hostage in siloed data centers because of systemic resistance to technology. I know there are concerns, but surely these concerns can be addressed.

I write this from a Canadian perspective, but I know these problems – and others – exist in the U.S. as well. If A.I. can do certain jobs four times better than a human, it’s time to accept that and build it into our healthcare system. The answers to Canada’s healthcare problems may not be easy, but they are doable: integrate our existing health records, open the door to incorporating personal biometric data from new wearable devices, use A.I. to analyze all this, and use humans where they can do things A.I. and technology can’t.

We need to start opening our minds to new solutions, because when it comes to a broken healthcare system, it’s literally a matter of life and death.