The Double-Edged Sword of a “Doer” Society

Ask anyone who comes from somewhere else to the United States what attracted them. The most common answer is “because anything is possible here.” The U.S. is a nation of “doers.” It is that promise that has attracted wave after wave of immigration, made up of those chafing at the restraints and restrictions of their homelands. The concept of getting things done was embodied in a line Robert F. Kennedy famously borrowed from George Bernard Shaw: “Some men see things as they are and ask why? I dream of things that never were and ask why not?” The U.S. – more than anywhere else in the world – is the place to make those dreams come true.

But that comes with some baggage. Doers are individualists by definition. They are driven by what they can accomplish, by making something from nothing. And with that comes an obsessive focus on time. When there is so much we could do, we constantly worry about losing time. Time becomes one of the few constraints in a highly individualistic society.

But the U.S. is not just individualistic. Other countries score highly on individualistic traits, including Australia, the U.K., New Zealand and my own home, Canada. The U.S. is different in that it’s also vertically individualistic – a highly hierarchical society obsessed with personal achievement. And – in the U.S. – achievement is measured in dollars and cents. In a Freakonomics podcast episode, Gert Jan Hofstede, a professor of artificial sociality in the Netherlands, called out this difference: “When you look at cultures like New Zealand or Australia that are more horizontal in their individualism, if you try to stand out there, they call it the tall poppy syndrome. You’re going to be shut down.”

In the U.S., tall poppies are celebrated and given god-like status. The ultra rich are recognized as the ideal to be aspired to. And this creates a problem in a nation of doers. If wealth is the ultimate goal, anything that stands between us and that goal is an obstacle to be eliminated.

When Breaking the Rules Becomes the Rule

“Move fast and break things” – Mark Zuckerberg

In most societies, equality and fairness are the guardrails of governance. The U.S. enshrined these principles in its constitution. Making sure things are fair and equal requires the establishment of the rule of law and the setting of social norms. But in the U.S., the breaking of rules is celebrated if it’s required to get things done. From the same Freakonomics podcast, Michele Gelfand, a professor of organizational behavior at Stanford, said, “In societies that are tighter, people are willing to call out rule violators. Here in the U.S., it’s actually a rule violation to call out people who are violating norms.”

There is an inherent understanding in the U.S. that sometimes trade-offs are necessary to achieve great things. It’s perhaps telling that Meta CEO Mark Zuckerberg is fascinated by the Roman emperor Augustus, a man generally recognized by history as achieving his goals at significant societal cost, including the subjugation of conquered territories and the brutal, systematic elimination of his opponents. This is fully recognized and embraced by Zuckerberg, who has said of his historical hero, “Basically, through a really harsh approach, he established 200 years of world peace. What are the trade-offs in that? On the one hand, world peace is a long-term goal that people talk about today … (but) … that didn’t come for free, and he had to do certain things.”

Slipping from Entrepreneurialism to Entitlement

A reverence for “doing” can develop a toxic side when it becomes embedded in a society. In many cases, entrepreneurialism and entitlement are two sides of the same coin. In a culture where entrepreneurial success is celebrated and iconized by the media, the focus of entrepreneurialism can shift from trying to profitably solve a problem to simply profiting. Chasing wealth becomes the singular focus of “doing.” In a society that has always encouraged everyone to chase their dreams, no matter the cost, this can create an environment where the Tragedy of the Commons is repeated over and over again.

This creates a paradox – a society that celebrates extreme wealth without seeming to realize that the more that wealth is concentrated in the hands of the few, the less there is for everyone else. Simple math is not the language of dreams.

To return to Augustus for a moment, we should remember that he was the one who dismantled an admittedly barely functioning republic and installed himself as autocratic emperor, consolidating power in his own hands and gutting Rome’s constitution.

Face Time in the Real World is Important

For all the advances made in neuroscience, we still don’t fully understand how our brains respond to other people. What we do know is that it’s complex.

Join the Chorus

Recent studies, including one from the University of Rochester, are showing that when we see someone we recognize, the brain responds with a chorus of neuronal activity. Neurons from different parts of the brain fire in unison, creating a congruent response that may simultaneously pull from memory, from emotion, from the rational regions of our prefrontal cortex and from other deep-seated areas of the brain. The firing of any one neuron may be relatively subtle, but together this chorus of neurons can create a powerful response to a person. This cognitive choir represents our total comprehension of an individual.

Non-Verbal Communication

“You’ll have your looks, your pretty face. And don’t underestimate the importance of body language!” – Ursula, The Little Mermaid

Given that we respond to people with different parts of the brain, it makes sense that we use parts of the brain we aren’t consciously aware of when communicating with someone else. In 1967, psychologist Albert Mehrabian attempted to pin this down with some actual numbers, publishing a paper in which he put forth what became known as Mehrabian’s Rule: 7% of communication is verbal, 38% is tone of voice and 55% is body language.
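Strictly speaking – and this matters for the misquoting discussed below – Mehrabian’s numbers applied to judgments of feeling and attitude, where the 55% referred to facial expression specifically. As a schematic equation (a sketch of the rule, not a universal law of communication):

\[
\text{perceived attitude} \approx 0.07 \times \text{words} + 0.38 \times \text{tone of voice} + 0.55 \times \text{facial expression}
\]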

Like many oft-quoted rules, this one is typically misquoted. It’s not that words are unimportant when we communicate something. Words convey the message. But it’s the non-verbal part that determines how we interpret the message – and whether we trust it or not.

Folk wisdom has told us, “Your mouth is telling me one thing, but your eyes are telling me another.” In this case, folk wisdom is right. We evolved to respond to another person with our whole bodies, with our brains playing the part of conductor. Maybe the numbers don’t exactly add up to Mehrabian’s neat and tidy ratio, but the importance of non-verbal communication is undeniable. We intuitively pick up incredibly subtle hints: a slight tremor in the voice, a bead of sweat on the forehead, a slight turn down of one corner of the mouth, perhaps a foot tapping or a finger trembling, a split-second darting of the eye. All this is subconsciously monitored, fed to the brain and orchestrated into a judgment about a person and what they’re trying to tell us. This is how we evolved to judge whether we should build trust or lose it.

Face to Face vs Face to Screen

Now, we get to the question you knew was coming, “What happens when we have to make these decisions about someone else through a screen rather than face to face?”

Given that we don’t fully understand how the brain responds to people yet, it’s hard to say how much of our ability to judge whether we should convey trust or withhold it is impaired by screen-to-screen communication. My guess is that the impairment is significant, probably well over 50%. It’s difficult to test this in a laboratory setting, given that it generally requires some type of neuroimaging, such as an fMRI scanner. In order to present a stimulus for the brain to respond to when the subject is strapped in, a screen is really the only option. But common sense tells me – given the sophisticated and orchestrated nature of our brain’s social responses – that a lot is lost in translation from a real-world encounter to a screen recording.

New Faces vs Old Ones

If we think of how our brains respond to faces, we realize that in today’s world, a lot of our social judgements are increasingly made without face-to-face encounters. When we know someone, we pull forward a snapshot of our entire history with that person. The current communication is just another data point in a rich collection of interpersonal experience. One would think that would substantially increase our odds of making a valid judgement.

But what if we must make a judgement about someone we’ve never met, and have only seen through a screen – be it a TikTok post, an Instagram Reel, a YouTube video or a Facebook post? What if we have to decide whether to believe an influencer when making an important life decision? Are we willing to rely on a fraction of our brain’s capacity when deciding whether to place trust in someone we’ve never met?

Keep Those Cousins Close!

Demographic trends tend to play out on the timelines of multiple generations. Declining birth rates, increased life spans and widespread lifestyle changes can all have a dramatic impact on not only what our families look like, but also how we connect with them. And because families are the nucleus of our world, changes in families mean fundamental changes in us: who we are, what we believe and how we connect with our world.

I have previously written about one such trend – a surplus of grandparents. The ratio of grandparents to grandchildren has never been higher than it is right now, thanks to increased life expectancy and a declining birth rate. It’s closing in on 1:1, meaning for every child, there is one unique grandparent. As a grandparent, I have to believe this is a good thing.

But another demographic trend is playing out and this may not be as positive for our family structure. While the grandparent market is booming, our supply of cousins is dwindling. And – as I’ll explain shortly – cousins are a good thing for us to have.

But first, a little demographic math. In the U.S. in 1960, the average number of children per household was 3.62. This was a spike thanks to the post-WWII Baby Boom, but it’s relevant because this generation and the one before it were the ones that determined the current crop of cousins for people of my age.

My parents were born in the 1930s. If both of them had 3 siblings, as was the norm, that would give me 6 aunts or uncles, all having children during the Baby Boom. And each of them would have 3 to 4 kids. So that would potentially supply 24 first cousins for me.

Now, let’s skip ahead a generation. Since 1970, the average number of children per household in the U.S. has hovered between 1.5 and 2. If I had been born in 1995, that would mean I had only 2 aunts or uncles, one from my mother’s side and one from my father’s. And if they each had 2 children, that would drop my first-cousin quota down to 4. That’s 20 fewer first cousins in just one generation!
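The arithmetic is simple enough to sketch in a few lines of code (the sibling and child counts below are the illustrative assumptions from the text, not census figures):

    # Back-of-the-envelope first-cousin count.
    def first_cousins(siblings_per_parent: int, kids_per_sibling: int) -> int:
        # Each parent's siblings are our aunts and uncles;
        # their children are our first cousins.
        aunts_and_uncles = 2 * siblings_per_parent
        return aunts_and_uncles * kids_per_sibling

    boomer_era = first_cousins(siblings_per_parent=3, kids_per_sibling=4)   # 24
    current_era = first_cousins(siblings_per_parent=1, kids_per_sibling=2)  # 4
    print(boomer_era - current_era)  # 20 fewer first cousins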

But what does this lack of first cousins mean in real terms? Cousins play an interesting sociological and psychological role in our development. Thanks to evolution, we all have something called “kinship altruism.” In the simplest of terms, we are hardwired to help those with whom we share some DNA. Those evolved bonds are strongest with those with whom we share the most DNA. There is a hierarchy of kinship – topped by our parents and siblings.

But just one rung down the ladder are our first cousins. And those first cousins can play a critical role in how we get along with the world as we grow up. As journalist Faith Hill said, writing about this in The Atlantic, “Cousin connections can be lovely because they exist in that strange gray area between closeness and distance—because they don’t follow a strict playbook.”

As Hill said, cousins represent a unique middle ground. We have a lot in common with our cousins, but not too much. Our cousins can come from different upbringings, can span a wider range of ages than our siblings, can come from different socio-economic circumstances, can even live in different places. We may see them every day, or once every year or two. Yet, we are connected in an important way. Cousins play a critical role in helping us navigate relationships and learn to understand different perspectives. Having a lot of cousins is like having a big sandbox for our societal development.

If you overlay societal trends on this demographic trend towards fewer first cousins, the shift is even more noticeable. We are a lot more mobile now than our parents and grandparents were. Families used to live close to each other. Now they spread across the country. My wife, who is Italian, has almost 50 first cousins, and almost all of them live in the same town. But that is rare. Most of us have a handful of cousins whom we rarely see. We don’t have the advantage of growing up together. At a time when societal connection is more important than ever, I worry that this is one more instance of us losing the skills we need to get along with each other.

From my own experience, I have found that the relationships between my cousins and me are vital in negotiating the stewardship of our families as it’s handed off from our parents’ generation to our own. I have personally become closer to many cousins as – one by one – our parents are taken from us. Through our cousins, we relive cherished memories and regain that common ground of shared experience and ancestry.

Bread and Circuses: A Return to the Roman Empire?

Reality sucks. Seriously. I don’t know about you, but increasingly, I’m avoiding the news because I’m having a lot of trouble processing what’s happening in the world. So when I look to escape, I often turn to entertainment. And I don’t have to turn very far. Never has entertainment been more accessible to us. We carry entertainment in our pocket. A 24-hour smorgasbord of entertainment media is never more than a click away. That should give us pause, because there is a very blurred line between simply seeking entertainment to unwind and becoming addicted to it.

Some years ago I did an extensive series of posts on the Psychology of Entertainment. Recently, a podcast producer from Seattle ran across the series when he was producing a podcast on the same topic and reached out to me for an interview. We talked at length about the ubiquitous nature of entertainment and the role it plays in our society. In the interview, I said, “Entertainment is now the window we see ourselves through. It’s how we define ourselves.”

That got me to thinking. If we define ourselves through entertainment, what does that do to our view of the world? In my own research for this column, I ran across another post on how we can become addicted to entertainment. And we do so because reality stresses us out: “Addictive behavior, especially when not to a substance, is usually triggered by emotional stress. We get lonely, angry, frustrated, weary. We feel ‘weighed down’, helpless, and weak.”

Check. That’s me. All I want to do is escape reality. The post goes on to say, “Escapism only becomes a problem when we begin to replace reality with whatever we’re escaping to.”

I believe we’re at that point. We are cutting ties to reality and replacing them with a manufactured reality coming from the entertainment industry. In 1985 – forty years ago – author and educator Neil Postman warned us in his book Amusing Ourselves to Death that we were heading in this direction. The calendar had just ticked past the year 1984, and the world had collectively sighed in relief that the vision of George Orwell’s novel hadn’t materialized. Postman warned that it wasn’t Orwell’s future we should be worried about. It was Aldous Huxley’s forecast in Brave New World that seemed to be materializing:

“As Huxley remarked in Brave New World Revisited, the civil libertarians and rationalists who are ever on the alert to oppose tyranny ‘failed to take into account man’s almost infinite appetite for distractions.’ … Orwell feared that what we hate will ruin us. Huxley feared that what we love will ruin us.”

Postman was worried then – 40 years ago – that the news was more entertainment than information. Today, we long for even the kind of journalism that Postman was already warning us about. He would be aghast to see what passes for news now. 

While things unknown to Postman (social media, fake news, even the internet) are throwing a new wrinkle into our downslide into an entertainment-induced coma, it’s not exactly new. This has happened at least once before in history, but you have to go back almost 2,000 years to find an example. Writing early in the second century AD, the Roman poet Juvenal used a phrase that summed it up – panem et circenses – “bread and circuses”:

“Already long ago, from when we sold our vote to no man, the People have abdicated our duties; for the People who once upon a time handed out military command, high civil office, legions — everything, now restrains itself and anxiously hopes for just two things: bread and circuses.”

Juvenal was referring to the strategy of the Roman emperors of providing free wheat, circus games and other entertainments to gain political power. In an academic article from 2000, historian Paul Erdkamp described the ploy as a “briberous and corrupting attempt of the Roman emperors to cover up the fact that they were selfish and incompetent tyrants.”

Perhaps history is repeating itself.

One thing we touched on in the podcast was a noticeable change in the entertainment industry itself. Scarlett Johansson noticed the 2025 Academy Awards ceremony was a much more muted affair than in years past. There was hardly any political messaging, nor were there sermons about how entertainment provides a beacon of hope and justice. In an interview with Vanity Fair, Johansson mused that perhaps it’s because almost all the major studios are now owned by big-tech billionaires: “These are people that are funding studios. It’s all these big tech guys that are funding our industry, and funding the Oscars, and so there you go. I guess we’re being muzzled in all these different ways, because the truth is that these big tech companies are completely enmeshed in all aspects of our lives.”

If we have willingly swapped entertainment for reality, and that entertainment is being produced by corporations who profit from addicting as many eyeballs as possible, prospects for the future do not look good.

We should be taking a lesson from what happened to Imperial Rome.

Paging Dr. Robot

When it comes to the benefits of A.I., one of the most intriguing opportunities is in healthcare. Microsoft recently announced that in a diagnostic challenge pitting its Microsoft AI Diagnostic Orchestrator (MAI-DxO) head to head against 21 general-practice physicians, the A.I. system correctly diagnosed 85% of 300 challenging cases gathered from the New England Journal of Medicine. The human doctors only managed to get 20% of the diagnoses correct.

This is of particular interest to me, because Canada has a health care problem. In a recent comparison of international health policies conducted by the Commonwealth Fund, Canada came in last amongst 9 countries, most of which also have universal health care, on most key measures of timely access.

This is a big problem, but it’s not an unsolvable one. This does not qualify as a “wicked” problem, which I’ve talked about before. Wicked problems have no clear solution. I believe our healthcare problems can be solved, and A.I. could play a huge role in the solution.

The Canadian Medical Association has outlined both the problems facing our healthcare system and some potential solutions. The overarching narrative is one of a system stretched beyond its resources and patients unable to access care in a timely manner. Human resources are burnt out and demotivated. Our back-end health record systems are siloed and inconsistent. An aging population, health misinformation, political beliefs and climate change are creating more demand for health services just as the supply of those services is being depleted.

Here’s one personal example of the gaps in our own health records. I recently had to go to my family doctor for a physical that is required to maintain my commercial driver’s license. I was handed off to a student doctor, given that it was a very routine check-up. Because I was seeing the doctor anyway, I thought it a good time to ask for a regular blood panel, because it had been a while since I’d had one. Being a male of a certain age, I also asked for a Prostate-Specific Antigen (PSA) test and was told that it isn’t recommended as a screening test in my province anymore.

I was taken aback. I had been diagnosed with prostate cancer a decade earlier and had been successfully treated for it. It was a PSA test that led to an early diagnosis. I mentioned this to the doctor, who was sitting behind a computer screen with my records in front of him. He looked back at the screen and said, “Oh, you had prostate cancer? I didn’t know that. Sure, I’ll add a PSA to the requisition.”

I wish I could say that’s an isolated incident, but it’s not. These gaps in our medical history records happen all the time here in my part of Canada. And they can all be solved. Aggregating and analyzing data at a scale beyond human limits is exactly what A.I. excels at. Yet our healthcare system continues to overwork exhausted healthcare providers and keep our personal health data hostage in siloed data centers because of systemic resistance to technology. I know there are concerns, but surely those concerns can be addressed.

I write this from a Canadian perspective, but I know these problems – and others – exist in the U.S. as well. If A.I. can do certain jobs four times better than a human, it’s time to accept that and build it into our healthcare system. The answers to Canada’s healthcare problems may not be easy, but they are doable: integrate our existing health records, open the door to incorporating personal biometric data from new wearable devices, use A.I. to analyze all of it, and use humans where they can do things A.I. and technology can’t.

We need to start opening our minds to new solutions, because when it comes to a broken healthcare system, it’s literally a matter of life and death.

The Question We Need to Ask about AI

This past weekend I listened to a radio call-in show about AI. The question posed was this: are those using AI regularly achievers or cheaters? A good percentage of the conversation focused on AI in education, especially for those in post-secondary studies. Educators worried about being able to detect the use of AI to help complete coursework, such as the writing of papers. Many callers – all of whom were probably well north of 50 years old – bemoaned the fact that students today are not understanding the fundamental concepts they’re being presented with because they’re using AI to complete assignments. A computer science teacher explained why he teaches obsolete coding to his students: it helps them understand why they’re writing code at all. What is it they want the code to do? He can tell when his students are using AI because they submit examples of code that are well beyond their abilities.

That, in a nutshell, sums up the problem with our current thinking about AI. Why are we worried about trying to detect the use of ChatGPT by a student who’s learning how to write computer code? Shouldn’t we be instead asking why we need humans to learn coding at all, when AI is better at it? Maybe it’s a toss-up right now, but it’s guaranteed not to stay that way for long. This isn’t about students using AI to “cheat.” This is about AI making humans obsolete.

As I was writing this, I happened across an essay by computer scientist Louis Rosenberg. He is worried that those in his circle, like the callers to the show I was listening to, “have never really considered what life will be like the day after an artificial general intelligence (AGI) is widely available that exceeds our own cognitive abilities.” Like I said, what we use AI for now is a poor indicator of what AI will be doing in the future. To use an analogy I have used before, it’s like using a rocket to power your lawnmower.

But what will life be like when, in a somewhat chilling example put forward by Rosenberg, “I am standing alone in an elevator — just me and my phone — and the smartest one speeding between floors is the phone?”

It’s hard to wrap your mind around the possibilities. One of the callers to the show was a middle-aged man who was visually impaired. He talked about the difference it made to him when he got a pair of Meta glasses last Christmas. Suddenly, his world opened up. He could make sure the pants and shirt he picked out to wear were colors that matched. He could see whether his recycling had been picked up before he made the long walk down the driveway to retrieve the bin. He could cook for himself because the glasses could tell him what was in the boxes he took off his kitchen shelf. AI gave him back his independence.

I personally believe we’re on the cusp of multiple AI revolutions. Healthcare will take a great leap forward when we lessen our requirement that expert advice come from a human. In Canada, general practitioners are in desperately short supply. When you combine AI with the leaps being made in incorporating biomonitoring into wearable technology, it’s hard to imagine what might not be possible in terms of living longer, healthier lives. I hope the same is true for dealing with climate change, agricultural production and the other existential problems we’re currently wrestling with.

But let’s back up to Rosenberg’s original question: what will life be like the day after AI exceeds our own abilities? The answer to that, I think, depends on who is in control of AI on the day before. The danger here is more than just humans becoming irrelevant. The danger is which humans are determining the future direction of AI before AI takes over the steering wheel and determines its own future.

For the past seven decades, the most pertinent question about our continued existence as a species has been this one: “Who is in charge of our combined nuclear arsenals?” But going forward, a more relevant question might be, “Who is setting the direction for AI?” Who is setting the rules, coming up with safeguards and determining what data the models are training on? Who determines what tasks AI takes on? Here’s just one example: does AI decide when the nuclear warheads are launched?

As I said, it’s hard to predict where AI will go. But I do know this. The general direction is already being determined. And we should all be asking, “By whom?”

Media Modelling of Masculinity       

According to a study just released by the Movember Institute of Men’s Health, nearly two-thirds of 3,000 young men surveyed in the U.S., the U.K. and Australia were regularly engaging with online masculinity influencers. They looked to these influencers for inspiration on how to be fitter, become more financially successful and increase the quantity and/or quality of their relationships.

Did they find what they were looking for?

It’s hard to say based on the survey results. While these young men said they found these influencers inspiring and were optimistic about their personal circumstances and the future social circumstances of men in general, they said some troubling things about their own mental health. They were less willing to prioritize mental health and were more likely to engage in risky health behaviors such as steroid use, or ignoring their own bodies and pushing themselves to exercise too hard. These mixed signals seemed to come from influencers telling them that a man who can’t control his emotions is weak and is not a real man.

And not all the harm inflicted by these influencers falls on the men in their audience. Those in the study who followed influencers were more likely to report negative and limiting attitudes towards women and what they bring to a relationship. They felt that women were often rude to them and that women didn’t share the same dating values as men.

Finally, men who followed influencers were almost twice as likely to value traits in their male friends such as ambition, popularity and wealth. They were less likely to look for trustworthiness or kindness in their male friends.

This brings us to a question: why do young men need influencers to tell them how to be a better man? For that matter, why do any of us, regardless of age or sex, need someone to influence us? Especially if it’s someone whose only qualification to dispense advice is that they happen to have a TikTok account with a million followers.

This is another unfortunate effect of social media. We have evolved to look for role models because to do so gives us a step up. Again, this made sense in our evolutionary past but may not do so today.

When we all belonged to a social group that was geographically bound together, it was advantageous to look at the most successful members of that group and emulate them. When we all competed in the same environment for the same resources, copying the ones that got the biggest share was a pretty efficient way to improve our own fortunes.

There was also a moral benefit to emulating a role model. Increasingly, as our fortunes relied more on creating better relationships with those outside our immediate group, things like trustworthiness became a behavior that we would do well to copy. Also, respect tended to accrue to the elderly. Our first role models were our parents and grandparents. In a community that depended on rules for survival, authority figures were another logical place to look for role models.

Let’s fast forward to today. Our decoupling from our evolutionarily determined, geographically limited idea of community has thrown several monkey wrenches into the delicate machinery of our society. Who we turn to as role models is just one example. As soon as we make the leap from rules based on physical proximity to the lure of mass influence, we inevitably run into problems.

Let’s go back to our masculinity influencers. These online influencers have one goal – to amass as many followers as possible. The economic reality of online influence is this: size of audience x depth of engagement = financial success. And how do you get a ton of followers? By telling them what they want to hear.
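As a toy model of that equation (every figure and rate below is an illustrative assumption, not an industry number), the incentive looks something like this:

    # A toy model of the influencer incentive: revenue scales with
    # audience size times engagement, so the rational move is to say
    # whatever grows both. All figures are made up for illustration.
    def monthly_revenue(followers: int, engagement_rate: float,
                        revenue_per_engagement: float) -> float:
        return followers * engagement_rate * revenue_per_engagement

    nuanced_hard_truths = monthly_revenue(50_000, 0.02, 0.05)          # $50
    tell_them_what_they_want = monthly_revenue(2_000_000, 0.08, 0.05)  # $8,000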

Let’s stare down some stark realities: well-adjusted, mentally secure, emotionally mature, self-confident young males are less likely to desperately look for answers on social media. There is no upside for influencers in going after this market. So they look elsewhere – primarily to young males who are none of the above. And that audience doesn’t want to hear about emotional vulnerability or realistic appraisals of their dating opportunities. They want to hear that they can have it all – that they can be real men. So the message (and the messenger) follows the audience, down a road that leads towards toxic masculinity.

Media provides a very distorted lens through which we might seek our new role models. We still seek the familiar and the successful, but both of those things are now determined by what we see through media rather than what we observe in real life. There is no proof that an influencer’s advice or approach will pay off in the real world, but if they have a large following, they must be right.

Also, these are one-way “mentorships.” The influencers may know their audience in the aggregate, if only as a market to be monetized, but they don’t know them individually. These are relationships without any reciprocity. There is no price to be paid for passing on potentially harmful advice.

If there is damage done, it’s no big deal. It’s just one less follower.

Do We Have the Emotional Bandwidth to Stay Curious?

Curiosity is good for the brain. It’s like exercise for our minds. It stretches the prefrontal cortex and whips the higher parts of our brains into gear. Curiosity also nudges our memory-making muscles into action and builds our brain’s capacity to handle uncertain situations.

But it’s hard work – mentally speaking. It takes effort to be curious, especially in situations where curiosity could figuratively “kill the cat.” The more dangerous our environment, the less curious we become.

A while back I talked about why the world no longer seems to make sense. Part of this is tied to our appetite for curiosity. Actively trying to make sense of the world puts us “out there”, leaving the safe space of our established beliefs behind. It is literally the definition of an “open mind” – a mind that has left itself open to being changed. And that’s a very uncomfortable place to be when things seem to be falling down around our ears.

Some of us are naturally more curious than others. Curious people typically achieve higher levels of education (learning and curiosity are two sides of the same coin). They are less likely to accept things at face value. They apply critical thinking to situations as a matter of course. Their brains are wired to be rewarded with a bigger dopamine hit when they learn something new.

Others rely more on what they believe to be true. They actively filter out information that may challenge those beliefs. They double down on what is known and defend themselves from the unknown. For them, curiosity is not an invitation, it’s a threat.

Part of this is a differing tolerance for something neuroscientists call “prediction error” – the difference between what we think will happen and what actually does happen. Non-curious people perceive predictive gaps as threats and respond accordingly, looking for something or someone to blame. It can’t be a mistaken belief that is at fault; it must be something else that caused the error. Curious people treat prediction errors as continually running scientific experiments, giving them a chance to discover the flaws in their current mental models and update them based on new information.
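In its simplest schematic form (a sketch, not a full model of the neuroscience), prediction error is just the gap between expectation and outcome:

\[
\delta = \text{what actually happened} - \text{what we predicted would happen}
\]

The two temperaments differ not in whether \(\delta\) shows up – it always does – but in what they do with it: explain it away, or use it to update the model.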

Our appetite for curiosity has a huge impact on where we turn to be informed. The incurious will turn to information sources that won’t challenge their beliefs. These are people who get their news from either end of the political bias spectrum, either consistently liberal or consistently conservative. Given that, they can’t really be called information sources so much as opinion platforms. Curious people are more willing to be introduced to non-conforming information. In terms of media bias, you’ll find them consuming news from the middle of the pack.

Given the current state of the world, more curiosity is needed but is becoming harder to find. When humans (or any animal, really) are threatened, we become less curious. This is a feature, not a bug. A curious brain takes a lot longer to make a decision than a non-curious one. It is the difference between thinking “fast” and “slow” – in the words of psychologist and Nobel laureate Daniel Kahneman. But this feature evolved when threats to humans were usually immediate and potentially fatal. A slow brain is not of any benefit if you’re at risk of being torn apart by a pack of jackals. But today, our jackal encounters are usually of the metaphorical type, not the literal one. And that’s a threat of a very different kind.

Whatever the threat, our brain throttles back our appetite for curiosity. Even the habitually curious develop defense mechanisms in an environment of consistently bad news. We seek solace in the trivial and avoid the consequential. We start conserving cognitive bandwidth for whatever impending doom we may be facing. We seek media that affirms our beliefs rather than challenges them.

This is unfortunate, because the threats we face today could use a little more curiosity.

Can Innovation Survive in Trump’s America?

If there is one thing that has sparked America’s success, it has been innovation. That has been the engine driving the U.S. forward for at least the last several decades. Yes, the U.S. has natural resources. Yes, at one time the U.S. led the world in manufacturing output. But in the pursuit of adding value to economic output to maximize profit, the U.S. has moved beyond resource extraction and manufacturing to the far-right end of the value chain, where the American economic engine relies heavily on innovation.

Donald Trump can talk all he wants about making America great again by bringing back manufacturing jobs that have migrated elsewhere in the world (a goal that many, including the Economic Policy Institute, feel is delusional, at least using Trump’s approach), but if innovation dies in the process, the U.S. loses. Game over. It’s innovation that now fuels the American Dream.

Given that, MAGA adherents should be careful what they wish for. The Great America they envision is a place where it may be impossible for that kind of innovation to survive.

World-class innovation needs an ecosystem: adequate funding for start-ups, a friendly regulatory framework, a robust research environment and an open-door policy for innovative immigrants from other countries – all of which the U.S. has historically had in spades. And – theoretically at least – it’s the ecosystem Trump has promised high tech, and why the tech broligarchy has been quick to court him. But like so many things with Trump, the reality will fall far short of his promises. In fact, he will likely stop innovation in its tracks and send U.S. ingenuity reeling backwards.

Next to the regulatory and economic inputs required for innovation – and perhaps more important than both – the biggest requirement is an environment that fosters divergent thinking. Study after study has shown that innovation thrives best in an environment that fosters collaboration, invites different perspectives and provides a safe space for experimentation. All those things lie in exactly the opposite direction from the one in which the U.S. is currently headed.

Each year, the World Intellectual Property Organization publishes its Global Innovation Index. In 2024, the U.S. was in third spot, behind Switzerland and Sweden. To understand how innovation flourishes, it’s worth looking at what the most innovative countries have in common. Of the top ten (the others are Singapore, the U.K., South Korea, Finland, the Netherlands, Germany and Denmark), almost all score the highest marks on the Economist’s Democracy Index for the strength of their democracy. Singapore is still struggling towards full democracy, and the U.S. is now considered a “flawed democracy,” in real danger of becoming an authoritarian regime.

The European contenders also receive very high marks for their social values and for enshrining personal rights and freedoms. Those are exactly the things currently being dismantled in America.

There is only one country defined as an authoritarian regime that made the top 25 of the Global Innovation Index: China, which sits in 11th spot. This brings us to a good question: can innovation happen in an authoritarian regime? The answer, I believe, is a qualified yes. But it’s innovation we may not recognize, and it may turn out to be a lot less attractive than we thought.

I happened to visit China right around the time Google was trying to move into the huge Chinese market. Its main competition was Baidu, the home-grown search engine. I was talking to a Google engineer about how they were competing with Baidu. He said it was almost impossible to match the speed at which Baidu could roll out new features. The reason wasn’t that Baidu was more innovative. It was that Baidu innovated through brute force. They could throw hundreds of programmers at an issue and hard-code it at the interface level, rather than take the Western approach of embedding core functionality in the base code in a more elegant and sustainable way. The Chinese could afford to endlessly code and recode.
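A stylized sketch of the difference (the feature names and functions here are hypothetical illustrations, not anyone’s actual code): the brute-force approach special-cases each new feature at the interface, while the core-abstraction approach lets new features plug into one shared mechanism.

    # Brute force: hard-code each new feature at the interface level.
    # Fast to ship if you can throw programmers at it, but every
    # feature means another special case to write and rewrite.
    def render_brute_force(query: str, results: list) -> str:
        if query.startswith("weather "):
            return f"<div class='weather-card'>{results[0]}</div>"
        if query.startswith("stocks "):
            return f"<div class='stock-card'>{results[0]}</div>"
        # ...hundreds more hand-coded cases...
        return "<ul>" + "".join(f"<li>{r}</li>" for r in results) + "</ul>"

    # Core abstraction: new features register against one shared
    # mechanism, so the interface code never needs to change.
    RENDERERS = {}

    def register(vertical: str):
        def wrapper(fn):
            RENDERERS[vertical] = fn
            return fn
        return wrapper

    @register("weather")
    def weather_card(results: list) -> str:
        return f"<div class='weather-card'>{results[0]}</div>"

    def render(vertical: str, results: list) -> str:
        fallback = lambda r: "<ul>" + "".join(f"<li>{x}</li>" for x in r) + "</ul>"
        return RENDERERS.get(vertical, fallback)(results)

The first version wins the short game; the second wins the long one, which is exactly the trade-off the engineer was describing.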

It’s Brute Force Innovation that you’ll find in authoritarian regimes and dictatorships. It’s what the Soviets used to compete in the space race. It’s what Nazi Germany used when they developed rocket science in a desperate bid to survive World War II. It is innovation dictated by the regime, innovating in prioritized areas by sheer force despite the fact that the typical underpinnings of innovation – creative freedom, divergent thinking, the security needed to experiment and fail – have been eliminated.

If you look at the playbook Trump seems to be following – akin to the one Viktor Orbán used in Hungary (ranked 36th on the Global Innovation Index) or Putin’s Russia (ranked 59th) – there appears to be little hope for the U.S. to retain its world dominance in innovation.

The Quaint Concept of Borders

According to a recent Leger poll, one in five Americans would like their state to secede and join Canada. In contrast, according to the same poll, only one in ten Canadians would like to see Canada become the 51st state.

Of course, no one takes either suggestion very seriously, except perhaps the President of the United States. And, given the current state of things, that job title is a little ridiculous. The states are probably less united than they have been at any time since the American Civil War.

All this talk about borders does make a good Facebook meme, though. You might have seen it: under the title “Problem Solved,” there’s a map of North America with the Canadian border redrawn to extend down the east and west coasts to include Washington, Oregon, California, New York, New Jersey and the New England states. Minnesota also gets to become part of the Great White North.

But – even if we took the suggestion seriously – does redrawing borders really solve any problem? Let’s assume Canada really did become part of the U.S. It would be a “big, beautiful state,” according to Donald Trump. A few have pointed out that that state, with its 40 million potential voters, would probably vote overwhelmingly against Trump. Again, according to Canadian pollster Leger, only about 12% of Canadians support him.

While we’re redrawing the map of the world, even oceans can’t get in the way. Here in Canada, we are rushing to realign with Europe and its markets. The idea has even been floated that Canada should join the European Union.  Our new prime minister, Mark Carney, has said we have more in common with Europe and the values found there than we do with our American neighbors to the south.

But again, this relies on the faulty logic of Canadians, Americans or Europeans being a cohesive bloc defined by a border. The recent rush of patriotism aside, Canadians rarely speak with one voice. For example, support for Trump runs highest in Alberta, where 23% of the province’s voters support him. He’s least popular in Canada’s Atlantic provinces, where support dips to 8%.

Or let’s hop across the border to the state closest to me: Washington. If you take the state in aggregate, it is a blue state by almost 20 points. But that designation depends on an aggregation of votes within a territory defined by a fairly arbitrary border. Look at Washington on a county-by-county basis and it’s hardly a cohesive voting bloc. Yes, the urban centers of Seattle and Olympia went heavily for Kamala Harris (74% in King County), but eastern Washington is a very different story. In many counties there, for every voter who chose Harris, three chose Trump. Ideologically, a resident of Pend Oreille County, Washington has much more in common with someone from Bonner County, just across the border in Idaho, than with someone from Jefferson County, on the west coast of Washington.

My point is this: given the polarization of our society, it’s almost impossible to draw a line anywhere on a map and think it defines the people within that line in any identifiable way. Right now, nowhere on earth shows this more starkly than the United States. Because of the borders of the U.S. and the political structures that determine who leads the people within those borders, the lives of almost two-thirds of Americans are being determined by a man they didn’t vote for. In fact, a big percentage of those two-thirds are vehemently opposed to their President and his policies. How does that make any sense?

Borders were necessary when our survival was tied to a specific location and the resources found within it. This forced a commonality on those who lived within those boundaries. They ate the same food, drank the same water, tilled the same fields, worked at the same factory, shopped at the same stores, attended the same church, and their children went to the same schools.

But our digital world has dissolved much of that commonality. Online, we are defined by how we think, not where we live. This creates a new definition of “tribe” and, by extension, of tribal territories. The divides between us now are based on differences in beliefs, not geographical obstacles. And the gap between those beliefs is getting wider and wider. This leaves the concept of a border increasingly anachronistic: borders define something that is becoming less and less real and more and more problematic, as the people who live in a state or country find less and less in common with their fellow citizens. As Scottish journalist James Crawford says in his book The Edge of the Plain: How Borders Make and Break Our World, the tension is usually felt most acutely on those arbitrary borders: “Wherever there are borders … that’s where you are going to find the most concentrated injustice.”

This redefining of our world as it decouples from the concept of “place” will put more and more pressure on the old idea of a border defining both a territory and a common ideology. When there is less cohesion among those living within a border than there is between ideologically aligned factions spread across the globe, we have to wonder how to manage this with political structures founded on common territory. This is particularly true for democracies, where you get a whipsaw backlash between right and left as the two factions grow further and further apart. That prognosis is not a good one. As Steven Levitsky and Daniel Ziblatt said in their book How Democracies Die, “Democracies rarely survive extreme partisanship.”