The Double-Edged Sword of a “Doer” Society

Ask anyone who comes from somewhere else to the United States what attracted them. The most common answer is “because anything is possible here.” The U.S. is a nation of “doers.” It is that promise that has attracted wave after wave of immigration, made up of those chafing at the restraints and restrictions of their homelands. The concept of getting things done was embodied in the line Robert F. Kennedy made famous (borrowing from George Bernard Shaw): “Some men see things as they are and ask why? I dream of things that never were and ask why not?” The U.S. – more than anywhere else in the world – is the place to make those dreams come true.

But that comes with some baggage. Doers are individualists by definition. They are driven by what they can accomplish, by making something from nothing. And with that comes an obsessive focus on time. When there is so much we can do, we constantly worry about losing time. Time becomes one of the few constraints in a highly individualistic society.

But the U.S. is not just individualistic. There are other countries that score highly on individualistic traits, including Australia, the U.K., New Zealand and my own home, Canada. But the U.S. is different in that it’s also vertically individualistic – it is a highly hierarchical society obsessed with personal achievement. And – in the U.S. – achievement is measured in dollars and cents. In a Freakonomics podcast episode, Gert Jan Hofstede, a professor of artificial sociality in the Netherlands, called out this difference: “When you look at cultures like New Zealand or Australia that are more horizontal in their individualism, if you try to stand out there, they call it the tall poppy syndrome. You’re going to be shut down.”

In the U.S., tall poppies are celebrated and given god-like status. The ultra-rich are held up as the ideal to aspire to. And this creates a problem in a nation of doers. If wealth is the ultimate goal, anything that stands between us and that goal is an obstacle to be eliminated.

When Breaking the Rules Becomes the Rule

“Move fast and break things” – Mark Zuckerberg

In most societies, equality and fairness are the guardrails of governance. It was the U.S. that enshrined these in its constitution. Making sure things are fair and equal requires the establishment of the rule of law and the setting of social norms. But in the U.S., the breaking of rules is celebrated if it’s required to get things done. From the same Freakonomics podcast, Michele Gelfand, a professor of organizational behavior at Stanford, said, “In societies that are tighter, people are willing to call out rule violators. Here in the U.S., it’s actually a rule violation to call out people who are violating norms.”

There is an inherent understanding in the U.S. that sometimes trade-offs are necessary to achieve great things. It’s perhaps telling that Meta CEO Mark Zuckerberg is fascinated by the Roman emperor Augustus, a person generally recognized by history as gaining his achievements by inflicting significant societal costs, including the subjugation of conquered territories and the brutal, systematic elimination of any opponents. This is fully recognized and embraced by Zuckerberg, who has said of his historical hero: “Basically, through a really harsh approach, he established 200 years of world peace. What are the trade-offs in that? On the one hand, world peace is a long-term goal that people talk about today … (but) … that didn’t come for free, and he had to do certain things.”

Slipping from Entrepreneurialism to Entitlement

A reverence for “doing” can develop a toxic side when it becomes embedded in a society. In many cases, entrepreneurialism and entitlement are two sides of the same coin. In a culture where entrepreneurial success is celebrated and iconized by the media, the focus of entrepreneurialism can often shift from trying to profitably solve a problem to simply profiting. Chasing wealth becomes the singular focus of “doing.” In a society that has always encouraged everyone to chase their dreams, no matter the cost, this creates an environment where the Tragedy of the Commons is repeated over and over again.

This creates a paradox – a society that celebrates extreme wealth without seeming to realize that the more that wealth is concentrated in the hands of the few, the less there is for everyone else. Simple math is not the language of dreams.

To return to Augustus for a moment, we should remember that he was the one responsible for dismantling an admittedly barely functioning republic and installing himself as the autocratic emperor by doing away with democracy, consolidating power in his own hands and gutting Rome’s constitution.

Bread and Circuses: A Return to the Roman Empire?

Reality sucks. Seriously. I don’t know about you, but increasingly, I’m avoiding the news because I’m having a lot of trouble processing what’s happening in the world. So when I look to escape, I often turn to entertainment. And I don’t have to turn very far. Never has entertainment been more accessible to us. We carry entertainment in our pocket. A 24-hour smorgasbord of entertainment media is never more than a click away. That should give us pause, because there is a very blurred line between simply seeking entertainment to unwind and becoming addicted to it.

Some years ago I did an extensive series of posts on the Psychology of Entertainment. Recently, a podcast producer from Seattle ran across the series when he was producing a podcast on the same topic and reached out to me for an interview. We talked at length about the ubiquitous nature of entertainment and the role it plays in our society. In the interview, I said, “Entertainment is now the window we see ourselves through. It’s how we define ourselves.”

That got me thinking. If we define ourselves through entertainment, what does that do to our view of the world? In my own research for this column, I ran across another post on how we can become addicted to entertainment. And we do so because reality stresses us out: “Addictive behavior, especially when not to a substance, is usually triggered by emotional stress. We get lonely, angry, frustrated, weary. We feel ‘weighed down’, helpless, and weak.”

Check. That’s me. All I want to do is escape reality. The post goes on to say, “Escapism only becomes a problem when we begin to replace reality with whatever we’re escaping to.”

I believe we’re at that point. We are cutting ties to reality and replacing them with a manufactured reality coming from the entertainment industry. In 1985 – forty years ago – author and educator Neil Postman warned us in his book Amusing Ourselves to Death that we were heading in this direction. The calendar had just ticked past 1984, and the world had collectively sighed in relief that the dystopian vision of George Orwell’s novel of that name hadn’t materialized. Postman warned that it wasn’t Orwell’s future we should be worried about. It was Aldous Huxley’s forecast in Brave New World that seemed to be materializing:

“As Huxley remarked in Brave New World Revisited, the civil libertarians and rationalists who are ever on the alert to oppose tyranny ‘failed to take into account man’s almost infinite appetite for distractions.’ … Orwell feared that what we hate will ruin us. Huxley feared that what we love will ruin us.”

Postman was worried then – 40 years ago – that the news was more entertainment than information. Today, we long for even the kind of journalism that Postman was already warning us about. He would be aghast to see what passes for news now. 

While things unknown to Postman (social media, fake news, even the internet) are adding a new wrinkle to our slide into an entertainment-induced coma, it’s not exactly new. This has happened at least once before in history, but you have to go back almost 2,000 years to find an example. As Rome’s citizens traded the civic duties of the republic for the comforts handed out by the emperors, the Roman poet Juvenal used a phrase that summed it up – panem et circenses – “bread and circuses”:

“Already long ago, from when we sold our vote to no man, the People have abdicated our duties; for the People who once upon a time handed out military command, high civil office, legions — everything, now restrains itself and anxiously hopes for just two things: bread and circuses.”

Juvenal was referring to the strategy of the Roman emperors of providing free wheat, circus games and other entertainments to gain political power. In an academic article from 2000, historian Paul Erdkamp said the ploy was a “briberous and corrupting attempt of the Roman emperors to cover up the fact that they were selfish and incompetent tyrants.”

Perhaps history is repeating itself.

One thing we touched on in the podcast was a noticeable change in the entertainment industry itself. Scarlett Johansson noticed the 2025 Academy Awards ceremony was a much more muted affair than in years past. There was hardly any political messaging or sermonizing about how entertainment provides a beacon of hope and justice. In an interview with Vanity Fair, Johansson mused that perhaps it’s because almost all the major studios are now owned by Big Tech billionaires: “These are people that are funding studios. It’s all these big tech guys that are funding our industry, and funding the Oscars, and so there you go. I guess we’re being muzzled in all these different ways, because the truth is that these big tech companies are completely enmeshed in all aspects of our lives.”

If we have willingly swapped entertainment for reality, and that entertainment is being produced by corporations who profit from addicting as many eyeballs as possible, prospects for the future do not look good.

We should be taking a lesson from what happened to Imperial Rome.

The Question We Need to Ask about AI

This past weekend I listened to a radio call-in show about AI. The question posed was this: are those using AI regularly achievers or cheaters? A good percentage of the conversation was focused on AI in education, especially post-secondary studies. Educators worried about being able to detect the use of AI to help complete coursework, such as the writing of papers. Many callers – all of whom were probably well north of 50 years old – bemoaned the fact that students today don’t understand the fundamental concepts they’re being presented because they’re using AI to complete assignments. A computer science teacher explained why he teaches obsolete coding to his students: it helps them understand why they’re writing code at all. What is it they want the code to do? He can tell when his students are using AI because they submit examples of code that are well beyond their abilities.

That, in a nutshell, sums up the problem with our current thinking about AI. Why are we worried about trying to detect the use of ChatGPT by a student who’s learning how to write computer code? Shouldn’t we be instead asking why we need humans to learn coding at all, when AI is better at it? Maybe it’s a toss-up right now, but it’s guaranteed not to stay that way for long. This isn’t about students using AI to “cheat.” This is about AI making humans obsolete.

As I was writing this, I happened across an essay by computer scientist Louis Rosenberg. He is worried that those in his circle, like the callers to the show I was listening to, “have never really considered what life will be like the day after an artificial general intelligence (AGI) is widely available that exceeds our own cognitive abilities.” Like I said, what we use AI for now is a poor indicator of what AI will be doing in the future. To use an analogy I have used before, it’s like using a rocket to power your lawnmower.

But what will life be like when, in a somewhat chilling example put forward by Rosenberg, “I am standing alone in an elevator — just me and my phone — and the smartest one speeding between floors is the phone?”

It’s hard to wrap your mind around the possibilities. One of the callers to the show was a middle-aged man who was visually impaired. He talked about the difference it made to him when he got a pair of Meta glasses last Christmas. Suddenly, his world opened up. He could make sure the pants and shirt he picked out to wear were colors that matched. He could see if his recycling had been picked up before he made the long walk down the driveway to retrieve the bin. He could cook for himself because the glasses could tell him what was in the boxes he took off his kitchen shelf. For him, AI gave him back his independence.

I personally believe we’re on the cusp of multiple AI revolutions. Healthcare will take a great leap forward when we lessen our reliance on expert advice coming from a human. In Canada, general practitioners are in desperately short supply. When you combine AI with the leaps being made by incorporating biomonitoring into wearable technology, the possibilities for living longer, healthier lives are hard to imagine. I hope the same is true for dealing with climate change, agricultural production and the other existential problems we’re currently wrestling with.

But let’s back up to Rosenberg’s original question: what will life be like the day after AI exceeds our own abilities? The answer to that, I think, depends on who is in control of AI on the day before. The danger here is more than just humans becoming irrelevant. The danger is which humans are determining the direction of AI before AI takes over the steering wheel and charts its own future.

For the past seven decades, the most pertinent question about our continued existence as a species has been this one: “Who is in charge of our combined nuclear arsenals?” But going forward, a more relevant question might be: “Who is setting the direction for AI?” Who is setting the rules, coming up with the safeguards and determining what data the models are training on? Who determines what tasks AI takes on? Here’s just one example: when does AI get to decide whether nuclear warheads are launched?

As I said, it’s hard to predict where AI will go. But I do know this. The general direction is already being determined. And we should all be asking, “By whom?”

Strategies for Surviving the News

When I started this post, I was going to unpack some of the psychology behind the consumption of the news. I soon realized that the topic is far beyond the confines of this post to realistically deal with. So I narrowed my focus to something that has been very top of mind for me lately: how do you stay informed without becoming a trembling psychotic mess? How do you arm yourself for informed action rather than being paralyzed into inaction by the recent fire hose of sheer WTF insanity that makes up the average news feed?

Pick Your Battles

There are few things more debilitating to humans than fretting about things we can’t do anything about. Research has found a strong correlation between depression and our locus of control – the term psychologists use for the range of things we feel we can directly impact. There is even a term for the way a constant diet of bad news crushes our perspective, leading us to believe the world is more dangerous and hostile than it really is: Mean World Syndrome.

If effecting change is your goal, decide what is realistically within your scope of control. Then focus your information gathering on those specific things. When it comes to informing yourself to become a better change agent, going deep rather than wide might be a better strategy.

Be Deliberate about Your Information Gathering

The second strategy goes hand in hand with the first. Make sure you’re in the right frame of mind to gather information. There are two ways the brain processes information: top-down and bottom-up. Top-down processing is cognition with purpose – you have set an intent and you’re working to achieve specific goals. Bottom-up processing is passively being exposed to random information and allowing your brain to be stimulated by it. The way you interpret the news will be greatly impacted by whether you’re processing it with top-down intent or letting your brain parse it from the bottom up.

By being more deliberate in gathering information with a specific intent in mind, you completely change how your brain will process the news. It will instantly put it in a context related to your goal rather than letting it rampage through your brain, triggering your primordial anxiety circuits.

Understand the Difference between Signal and Noise

Based on the top two strategies, you’ve probably already guessed that I’m not a big fan of relying on social media as an information source. And you’re right. A brain doom scrolling through a social media feed is not a brain primed to objectively process the news.

Here is what I did. For the broad context, I picked two international information sources I trust to be objective: The New York Times and the Economist out of the U.K. I subscribed to both because I wanted sources that weren’t totally reliant on advertising as a revenue source (a toxic disease which is killing true journalism). For Americans, I would highly recommend picking at least one source outside the US to counteract the polarized echo chamber that typifies US journalism, especially that which is completely ad supported.

Depending on your objectives, include sources that are relevant to those objectives. If local change is your goal, make sure you are informed about your community. With those bases in place, even if you get sucked down a doom scrolling rabbit hole, at least you’ll have a better context to allow you to separate signal from noise.

Put the Screen Down

I realize that the majority of people (about 54% of US Adults according to Pew Research) will simply ignore all of the above and continue to be informed through their Facebook or X feeds. I can’t really change that.

But for the few of you out there that are concerned about the direction the world seems to be spinning and want to filter and curate your information sources to effect some real change, these strategies may be helpful.

For my part, I’m going to try to be much more deliberate in how I find and consume the news.  I’m also going to be more disciplined about simply ignoring the news when I’m not actively looking for it. Taking a walk in the woods or interacting with a real person are two things I’m going to try to do more.

Getting Tired of Trying to Tell the Truth

It’s not always easy writing these weekly posts. I try to deal with things of consequence, and usually I choose things that may be negative in nature. I also try to learn a little bit more about these topics by doing some research and approaching the post in a thoughtful way.  All of this means I have gone down several depressing rabbit holes in the course of writing these pieces over the years.

I have to tell you that, cumulatively, it takes a toll. Some weeks, it can only be described as downright depressing. And that’s just for myself, who only does these once a week. What if this were my full-time job? What if I were a journalist reporting on an ever more confounding world? How would I find the motivation to do my job every day?

The answer, at least according to a recent survey of 402 journalists by PR industry platform Muck Rack, is that I could well be considering a different job. Last year, 56% of those journalists considered quitting.

The reasons are many. I and others have repeatedly talked about the dismal state of journalism in North America. The advertising-based economic model that supports true reporting is falling apart. Publishers have found that it’s more profitable to pander to prejudice and preconceived beliefs than it is to actually try to report the truth and hope to change people’s minds. When it comes to journalism, it appears that Colonel Nathan R. Jessup (from the movie A Few Good Men) may have been right. We can’t handle the truth. We prefer to be spoon-fed polarized punditry that aligns with our beliefs. When profitability is based on the number of eyeballs gained, you get a lot more of them at a far lower cost by peddling opinion rather than proof. This has led to round after round of mass layoffs, cutting newsroom staffing by double-digit percentages.

This reality brings a crushing load of economic pressure down on journalists. According to the Muck Rack survey, most journalists are battling burnout due to working longer hours with fewer resources. But it’s not just the economic constraints that are taking their toll on journalists. A good part of the problem is the evolving nature of how news develops and propagates through our society.

There used to be such a thing as a 24-hour news cycle, defined by a daily publication deadline, whether that was the printing of a newspaper or the broadcast of the nightly news. As tight as 24 hours was, it was downright leisurely compared to the split-second reality of today’s information environment. New stories develop, break and fade from significance in minutes now, rather than the days or weeks of the past. And that means a journalist who hopes to keep up always has to be on. There is no such thing as downtime or being “off the grid.” Even with new tools and platforms to help monitor and filter the tidal wave of signal and noise that is today’s information ecosystem, a journalist always has to be plugged in and logged on to do the job.

That is exhausting.

But perhaps the biggest reason why journalists are considering a career change is not the economic constraints nor the hours worked. It’s the nature of the job itself. No one chooses to be a journalist because they want to get rich. It’s a career built on passion. Good journalists want to do something significant and make a difference. They do it because they value objectivity and truth and believe that by reporting it, they can raise the level of thought and discourse in our society. Given the apparent dumpster fire that seems to sum up the world today, can you blame them for becoming disillusioned with their chosen career?

All of this is tremendously sad. But even more than that, it is profoundly frightening. In a time when we need more reliably curated, reliably reported information about the state of affairs than ever before, those we have always trusted to provide this are running – in droves – towards the nearest exit.

My 1000th Post – and My 20 Year Journey

Note: This week marks the 1000th post I’ve written for MediaPost. For this blog, all of those posts are here, plus a number that I’ve written for other publications and exclusively for Out of My Gord. But the sentiments here apply to all those posts. If you’re wondering, I’ve written 1233 posts in total.

According to the MediaPost search tool, this is my 1000th post for this publication. There are a few duplicates in there, but I’m not going to quibble. No matter how you count them up, that’s a lot of posts.

My first post was written on August 19th, 2004. Back then I wrote exclusively for the emerging search industry. Google was only six years old. It had just gone public, with investors hoping to cash in on this new thing called paid search. Social media was even greener. There was no Facebook. Something called Myspace had launched the year before.

In the 20 years I’ve written for MediaPost, I’ve bounced from masthead to masthead. My editorial bent evolved from being search-industry specific until I eventually found my sweet spot: the intersection of human behavior and technology.

It’s been a long and usually interesting journey. When I started, I was the parent of two young children who I dragged along to industry events, using the summer search conference in San Jose as an opportunity to take a family camping vacation. I am now a grandfather, and I haven’t been to a digital conference for almost 10 years (the last being the conferences I used to host and program for the good folks here at MediaPost).

When I started writing these posts, I was both a humanist and a technophile. I believed that people were inherently good, and that technology would be the tool we would use to be better. The Internet was just starting to figure out how to make money, but it was still idealistic enough that people like me believed it would be mostly a good thing. Google still had the phrase “Don’t be Evil” as part of its code of conduct.

Knowing this post was coming up, I’ve spent the past few months wondering what I’d write when the time came. I didn’t want it to be yet another look back at the past 20 years. What history I have included is there to provide some context.

No, I wanted this to be about what this journey has been like for me. There is one thing about having an editorial deadline that forces you to come up with something to write about every week or two: it compels you to pay attention. It also forces you to think. The person I am now – what I believe and how I think about both people and technology – has been shaped in no small part by writing these 1000 posts over the past 20 years.

So, if I started as a humanist and technophile, what am I now, 20 years later? That is a very tough question to answer. I am much more pessimistic now. And this post has forced me to examine the causes of my pessimism.

I realized I am still a humanist. I still believe that if I’m face to face with a stranger, I’ll always place my bet on them helping me if I need it. I have faith that it will pay off more often than it won’t. If anything, we humans may be just a tiny little bit better than we were 20 years ago: a little more compassionate, a little more accepting, a little more kind.

So, if humans haven’t changed, what has? Why do I have less faith in the future than I did 20 years ago? Something has certainly changed. But what was it, I wondered?

Coincidentally, as I was thinking of this, I was also reading the late Philip Zimbardo’s book – The Lucifer Effect: Understanding How Good People Turn Evil. Zimbardo was the researcher who oversaw the Stanford Prison Experiment, where ordinary young men were randomly assigned roles as guards or inmates in a makeshift prison set up in a Stanford University basement. To make a long story short – ordinary people started doing such terrible things that they had to cut the experiment short after just 6 days.

Zimbardo reminded me that people are usually not dispositionally completely good or bad, but we can find ourselves in situations that can push us in either direction. We all have the capacity to be good or evil. Our behavior depends on the environment we function in. To use an analogy Zimbardo himself used, it may not be the apples that are bad. It could be the barrel.

So I realized, it isn’t people who have changed in the last 20 years, but the environment we live in. And a big part of that environment is the media landscape we have built in those two decades. That landscape looks nothing like it did back in 2004.  With the help of technology, we have built an information landscape that doesn’t really play to the strengths of humanity. It almost always shows us the worst side of ourselves. Journalism has been replaced by punditry. Dialogue and debate have been pushed out of the way by demagoguery and divisiveness.

So yes, I’m more pessimistic now than I was when I started this journey 20 years ago. But there is a glimmer of hope here. If people had truly changed, there would not be a lot we could do about that. But if it’s the media landscape that’s changed, that’s a different story. Because we built it, we can also fix it.

It’s something I’ll be thinking about as I start a new year.

Why The World No Longer Makes Sense

Does it seem that the world no longer makes sense? That may not just be you. The world may in fact no longer be making sense.

In the late 1960s, psychologist Karl Weick introduced the world to the concept of sensemaking, but we were making sense of things long before that. It’s the mental process we go through to try to reconcile who we believe we are to the world in which we find ourselves.  It’s how we give meaning to our life.

Weick identified 7 properties critical to the process of sensemaking. I won’t mention them all, but here are three that are critical to keep in mind:

  1. Who we believe we are forms the foundation we use to make sense of the world
  2. Sensemaking needs retrospection. We need time to mull over new information we receive and form it into a narrative that makes sense to us.
  3. Sensemaking is a social activity. We look for narratives that seem plausible, and when we find them, we share them with others.

I think you see where I’m going with this. Simply put, our ability to make sense of the world is in jeopardy, both for internal and external reasons.

External to us, the quality of the narratives that are available to us to help us make sense of the world has nosedived in the past two decades. Prior to social media and the implosion of journalism, there was a baseline of objectivity in the narratives we were exposed to. One would hope that there was a kernel of truth buried somewhere in what we heard, read or saw on major news providers.

But that’s not the case today. Sensationalism has taken over journalism, driven by the need for profitability by showing ads to an increasingly polarized audience. In the process, it’s dragged the narratives we need to make sense of the world to the extremes that lie on either end of common sense.

This wouldn’t be quite as catastrophic for sensemaking if we were more skeptical. The sensemaking cycle does allow us to judge the quality of new information for ourselves, deciding whether it fits with our frame of what we believe the world to be, or if we need to update that frame. But all that validation requires time and cognitive effort. And that’s the second place where sensemaking is in jeopardy: we don’t have the time or energy to be skeptical anymore. The world moves too quickly to be mulled over.

In essence, our sensemaking is us creating a model of the world that we can use without requiring us to think too much. It’s our own proxy for reality. And, as a model, it is subject to all the limitations that come with modeling. As the British statistician George E.P. Box said, “All models are wrong, but some are useful.”

What Box didn’t say is this: the more wrong our model is, the less likely it is to be useful. And that’s the looming issue with sensemaking. The model we use to determine what is real is becoming less and less tethered to actual reality.

It was exactly that problem that prompted Daniel Schmachtenberger and others to set up the Consilience Project. The idea of the Project is this – the more diversity in perspectives you can include in your model, the more likely the model is to be accurate. That’s what “consilience” means: pulling perspectives from different disciplines together to get a more accurate picture of complex issues.  It literally means the “jumping together” of knowledge.

The Consilience Project is trying to reverse the erosion of modern sensemaking – both from an internal and external perspective – that comes from the overt polarization and the narrowing of perspective that currently typifies the information sources we use in our own sensemaking models.  As Schmachtenberger says,  “If there are whole chunks of populations that you only have pejorative strawman versions of, where you can’t explain why they think what they think without making them dumb or bad, you should be dubious of your own modeling.”

That, in a nutshell, explains the current media landscape. No wonder nothing makes sense anymore.