We’re Constantly Rewriting Our History

“No man ever steps into the same river twice, because it is not the same river, and he is not the same man.” – Heraclitus

Time is a funny thing. It is fluid, flowing and ever changing. It’s no surprise, then, that the Greek philosopher Heraclitus tried to describe it using the analogy of a river. He then doubled down on the theme of change by saying it wasn’t only the river that was constantly changing; it was also the person stepping into it. With time, nothing ever stays static. Any attempt to capture the present we inhabit is simply a snapshot in time, taken from one of a million different vantage points.

This is also true when we look backwards. Like time itself, our history does not stay static. It is constantly being rewritten, depending on when and where we are and what our view of our own reality is. The past is constantly in flux – eternally in the process of being rewritten using the lens of today’s culture and political reality to interpret what happened yesterday.

This is happening everywhere.

Right now, in the occupied parts of Ukraine, school history curricula are being rewritten en masse to conform to a Kremlin-approved version of the past dictated by Moscow’s Ministry of Enlightenment. References to Ukraine and Kyiv are being edited out. There are numerous mentions of Putin as the savior of the area’s true Russian heritage. Teachers who try to remain pro-Ukrainian are threatened with deportation and forced into hiding, or are sent away for “re-training.”

Here in Canada, the country’s history being taught in schools today bears scant resemblance to the history I learned as a child some six decades ago. The colonial heroes of the past (almost all of English, Scottish or French descent) are being re-examined in the light of our efforts to reconcile ourselves to our true history. What we know now is that many of the historic heroes we used to name universities after and erect statues to honor were astoundingly racist and complicit in a planned program of cultural eradication against our Indigenous population.

And in the US, the MAGA-fication of cultural and heritage institutions is proceeding at a breakneck pace. Trump has tacked his name onto the Kennedy Center. The White House is in the process of being “bedazzled” into a grotesque version of its former stately self, cloaked in a design sensibility more suitable to a 17th-century French Sun King.

Perhaps the most overt example of rewriting history came with an executive order issued last year with the title “Restoring Truth and Sanity to American History.” This little Orwellian gem gives J.D. Vance (who sits on the Smithsonian’s Board of Regents) the power to eliminate “improper, divisive or anti-American ideology” from the museums and related centers. The inconvenient bits of history that this order aims to sweep under the carpet include slavery and the U.S.’s own sordid history of colonialism. These things have been determined to be “un-American.”

Compare all of this to the mission statement of the Smithsonian, which is to “increase and diffuse knowledge, providing Americans and the world with the tools and information they need to forge Our Shared Future.”

I wholeheartedly agree with that mission. I have said that we need to know our past to know what we aspire to be in the future. But that comes with a caveat: you have to embrace the past – as near as you’re able – for what it truly was, warts and all. Historians have an obligation not to whitewash the past. But we must also realize that actions we abhor today took place within a social context that made them more permissible – or even lauded – at the time. It is a historian’s job to record the past faithfully, but also to interpret it given the societal and cultural context of the present.

This is the balancing act historians have to engage in if we’re truly going to use the past as something we can learn from.

Singing in Unison

It’s the year-end so it’s time to reflect and also to look forward, carrying what we’ve learned in the past into an uncertain future.

Let me share one thing I’ve learned: we have to get serious about how we create community. And by community, I mean something quite specific. In fact, perhaps it would be more accurate to replace “community” with “choir.”

Let me explain my thought with a story.

In the late 1980s, Harvard professor Bob Putnam was in Italy doing research. He was studying Italy’s regional decentralization of power, which began in 1970. For a political scientist like Putnam, this was an opportunity that didn’t come along often. Italy had passed power down to its 20 regional governments and had also established a single framework for administration and governance. This framework was the constant. The variables were the people, the social environment and the nature of the regions themselves. If you’re familiar with Italy, you know that there are vast differences between these regions, especially from north to south.

Putnam looked at how effective each regional government was – was democracy working in the region? Even though the administrators were all referring to the same playbook, the results were all over the map – literally. Generally speaking, governance in Northern and Central Italy was much more effective than in the South.

For Putnam, the next big question was – why? What was it about some regions that made democracy work better than in others? He looked for correlations in the reams of data he had collected. Was it education? Wealth? Occupational breakdowns? In examining each factor, he found some correlation, but they all fell short of the strong positive relationship he was looking for.

Finally, he took a break from the analysis and drove into the country with his wife, Rosemary.  Stopping in one town, he heard music coming from a small church, so the two stepped inside. There, a volunteer choir was singing.  It may sound cliché, but in that moment, Bob Putnam had an epiphany. Perhaps the answer lay in people coming together, engaging in civic activities and creating what is called “social capital” by working together as a group.

Maybe democracy works best in places where people actually want to sing together.

Bob Putnam went back to the numbers and, sure enough, there was an almost perfect correlation. The regions with the most clubs, civic groups, social organizations and – yes – choral societies also had the highest degree of democratic effectiveness. This set Putnam on a path that would lead to the publication of this work in 1993 as Making Democracy Work, along with his subsequent 2000 bestseller, Bowling Alone. (If you’d like to know more about Bob Putnam, check out the excellent Netflix documentary Join or Die.)
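As a rough illustration of the kind of check Putnam was running – with invented numbers standing in, since I obviously don’t have his dataset – here’s what hunting for that correlation amounts to in code:

```python
# A toy version of Putnam-style correlation hunting. The figures are
# invented for illustration; Putnam's real dataset was far richer.
from statistics import correlation  # available in Python 3.10+

# Hypothetical regions: civic groups per 1,000 residents vs. a
# 0-100 index of government effectiveness.
civic_density = [1.2, 0.8, 3.5, 2.9, 0.5, 4.1]
effectiveness = [42, 35, 81, 74, 28, 90]

r = correlation(civic_density, effectiveness)
print(f"Pearson r = {r:.2f}")  # close to 1.0: a near-perfect fit
```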

Putnam showed it’s better to belong – but that only explains part of the appeal of a choir. There has to be something special about singing together.

Singing as a group is one of those cultural universals; people do it everywhere in the world. And we’ve been doing it forever, since before we started recording our history. Modern science has now started to discover why. Singing as a group causes the brain to release oxytocin – christened the “moral molecule” by neuro-economist Paul J. Zak – by the bucketload. Zak explains the impact of this chemical: “When oxytocin is raised, people are more generous, they’re more compassionate and, in particular, they’re empathetic – they connect better to people emotionally.”

The result of an oxytocin high is the creation of the building blocks of trust and social capital. People who sing together treat each other better. Our brains start tuning into other brains through something called neural synchrony. We connect with other people in a profoundly and beautifully irrational way that burrows down through our consciousness to a deeply primal level.

But there is also something else going on here that, while not unique to singing together, finds a perfect home in your typical community choir.

Sociologist Émile Durkheim found that groups that do the same thing at the same time experience something called “collective effervescence.” This is the feeling of being “carried away” and being part of a whole that is greater than the sum of its parts. You find it in religious ceremonies, football stadiums, rock concerts and – yes – you’ll find it in choirs.

So, if singing together is so wonderful, why are we doing it less and less? When was the last time you sang – really sang, not just moved your lips – with others? For myself, it’s beyond the limits of my own memory. Maybe it was when I was still a kid. And I suspect the reason I haven’t sung out loud since is because someone, somewhere along the line, told me I couldn’t sing.

But that’s not the point. Singing shouldn’t be competitive. It should be spiritual. We shouldn’t judge ourselves against the singers we see in media. This never used to be the case. It’s just one more example of how we can never be good enough – at anything – if we use media as our mirror.

So, in 2026, I’m going to try singing more. Care to join me?

Home Movies: The Medium of Memories

Media is a word that is used a lot, especially in my past industry of advertising, but we may not stop to think about the origin of the word itself. Media is the plural of medium, and in our particular context, medium is defined as “the intervening substance through which impressions are conveyed to the senses.”

When defined this way, a medium is powerful stuff. Let me give you a personal example.

At a recent family gathering, a few cousins were talking about old 8 mm home movies. Some of you know what I’m talking about. You might even have some yourself, stuck somewhere in your attic or basement. They came in yellow-orange boxes from Kodak and might have “Kodachrome II” on the front. In my case, I had some that I salvaged from my mom during her move to her care facility. Two of my cousins similarly took custody of their films from their respective mothers. I packed what I could of these in my suitcase and gingerly transported them home, after trying to explain to a curious TSA official what they were and why they couldn’t go through an X-ray scanner.

When I got them home, I transferred them to digital. Since December 1st, I have been sharing small snippets of the resulting videos with the rest of my family, one a day, in a kind of home-movie Advent calendar.

Most of these home movies were shot between the mid-1950s and mid-1960s, capturing picnics, weekends at the family cottage north of Toronto, weddings, birthdays, going-away parties, Christmases and other assorted occasions. I’ll soon tell you what this sharing of one particular medium has meant to my family and me, but first I want to give you a little background on 8 mm home movies, because I think it helps to understand why they were such an important medium.

The 8 mm format was introduced by Kodak in 1932. It actually used 16 mm film that had to be flipped and run through the camera twice. In processing, the film would be split and the two halves spliced together to create a 50-foot reel, capturing about three to four minutes of footage.
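That three-to-four-minute figure falls straight out of the format’s standard specs. Here’s a quick back-of-the-envelope check, assuming the commonly cited values of 80 frames per foot for 8 mm film and silent-era frame rates of 16 to 18 frames per second:

```python
# Back-of-the-envelope runtime for a standard 8 mm reel, using the
# commonly cited specs: 80 frames per foot, silent-era frame rates.
REEL_FEET = 50
FRAMES_PER_FOOT = 80                          # standard 8 mm frame pitch
total_frames = REEL_FEET * FRAMES_PER_FOOT    # 4,000 frames

for fps in (16, 18):                          # typical silent frame rates
    minutes = total_frames / fps / 60
    print(f"{fps} fps -> {minutes:.1f} minutes")
# 16 fps -> 4.2 minutes; 18 fps -> 3.7 minutes: "about 3 to 4 minutes"
```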

Kodak hoped to extend the ability to make movies to the home market, but between the Great Depression and World War II, the format didn’t gain real traction until the post-war consumer boom. Then, thanks to smaller cameras that were easier to use and improved picture quality, 8 mm movie cameras became more commonplace and started showing up at family gatherings, weddings, honeymoons, vacations and other notable events.

It would have been in the mid-1950s that my mother’s family bought their first cameras. My grandfather and grandmother, a few great-uncles and my mom and dad all became amateur movie makers. Suddenly, many family events became multi-camera shoots.

It was the results of this movie-making boom in my family that I recently started digging through, rounding up those little yellow boxes, delicately threading the fragile film into a digital scanning system and letting grainy, poorly lit moving pictures transport me back to a time I had only heard stories about before.

Let me tell you what that meant to my family and myself. I never met my maternal grandfather (or my paternal one either, but that’s another story for another time). He passed away two weeks after I was born. I also never knew my father. He died tragically when I was just one year old. These were two men I desperately wanted to know but never had the chance to. I only knew them through still photos and stories passed on by older family members.

But suddenly, there they were: moving, laughing and living. My grandfather teasing my grandmother mercilessly and then sitting back in his easy chair with a big smile on his face as he watched his family around him. My father at his and my mom’s wedding, holding a huge cigar in one hand while he picked confetti out of his hair with the other. “My God,” I thought, “he stands just like me!”

This medium, long forgotten as it sat in dusty boxes, brought my grandfather and father back to life for me. It colored in the outline sketches I had of who they were. For my family, these movies reconnected us to our younger selves, brought loved ones back, introduced the younger members to their direct ancestors and – for myself and others – shed new light on figures in our past that had been shrouded in the shadows of time.

Because of this project, two things became clear to me. First, if you have also inherited old media filled with family memories, find the time to transfer them into a format that allows them to be shared and preserved for the future. The act of archiving conjures images of bespectacled staff peering over dusty tomes and pulling forgotten boxes from the top shelf. But it is simply the act of imbuing the past with a kind of permanence, so it always remains accessible.

Secondly, recognize the importance of any type of medium that captures the moments of our lives. Rick Prelinger, an archivist in California, has compiled a collection of over 30,000 home movies. He published a list of 22 reasons why home movies are important. For me, number 21 resonated most deeply: “showing and reusing (these movies) today invests audiences with the feeling that their own lives are also worth recording.”

I’m sure my dad or granddad had no idea of their own impending mortality when they were captured on these movies. They weren’t planning on being memorialized. They didn’t realize the importance of the moment – or the medium.

But today, these movies are one of the all-too-rare things we have to remember who they were. For me, it was this medium that erased the time and distance between my senses, here at the end of 2025, and that day in June 1957 – the day my parents got married.

Thank Heavens someone was there with a camera.

Being in the Room Where It Happens

I spent the past weekend attending a conference that I had helped to plan. As is often the case now, this was a hybrid conference; you could choose to attend in person or online via Zoom. Although it involved a long plane ride, I chose to attend in person. It could be because – as a planner – I wanted to see how the event played out. Also, it had been a long time since I attended a conference away from home. Or – maybe – it was just FOMO.

Whatever the reason, I’m glad I was there, in the room.

This was a very small conference planned on a shoestring budget. We didn’t have money for extensive IT support or AV equipment. We were dependent solely on a laptop and whatever sound equipment our host was able to supply. We knew going in that this would make for a less-than-ideal experience for those attending virtually. But – even accounting for that – I found there was a huge gap in the quality of the experience between those who were there and those who attended online. And, over the duration of the three-day conference, I observed why that might be so.

This conference was a 50/50 mix of those who already knew each other and those who were meeting for the first time. Even those who were familiar with each other tended to connect more often via a virtual meeting platform than in a physical meeting space. I know that despite the convenience and efficiency of meeting online, something is lost in the process. After a few days of carefully observing what was happening in the room we were all in, I have a better understanding of what that loss might be – the vague and inexact art of creating a real bond with another person.

In that room, the bonding didn’t happen at the speaking podium, and it very seldom happened during the sessions we had so carefully planned. It seeped in on the sidelines, over warmed-over coffee from conference centre urns, overripe bananas and the detritus of the picked-over pastry tray. The bonding came from all of us sharing and digesting a common experience. You could feel a palpable energy in the room. You could pick up the emotion, read the body language and tune in to the full bandwidth of communication that goes far beyond what can be transmitted between an onboard microphone and a webcam.

But it wasn’t just the sharing of the experience that created the bonds. It was the digesting of those experiences after the fact. We humans are herding animals, and that extends to how we come to consensus about things we go through together. We do so through communication with others – not just with words and gestures, but through the full bandwidth of our evolved mechanisms for coming to a collective understanding. It wasn’t just that a camera and microphone couldn’t transmit that effectively; it was that it happened where there was no camera or mic.

As researchers have discovered, there is a lived reality and a remembered reality, and often they don’t look very much alike. The difference in effectiveness between an in-person experience and one accessed through an online platform shouldn’t come as a surprise to us. It is due to how our evolved sense-making mechanisms operate. We make sense of reality both internally, by comparison with our existing cognitive models, and externally, by interacting with others who have shared that same reality. This communal give-and-take colors what we take with us, in the form of both memories and an updated model of what we know and believe. When it comes to how humans are built, collective sense-making is a feature, not a bug.

I came away from that conference with much more than the content that was shared at the speaker dais. I also came away with a handful of new relationships, built on sharing an experience and, through that, laying down the first foundations of trust and familiarity. I would not hesitate to reach out to any of these new friends if I had a question about something or a project I felt they could collaborate on.

I think that’s true largely because I was in the room where it happened.

There Are No Short Cuts to Being Human

The Velvet Sundown fooled a lot of people, including millions of fans on Spotify and the writers and editors at Rolling Stone. It was a band that suddenly showed up on Spotify several months ago, with full albums of vintage Americana-styled rock. Millions started streaming the band’s songs – except there was no band. The songs, the album art, the band’s photos – it was all generated by AI.

Knowing this and relistening to the songs, you’d swear you could never have been fooled. Those who are now in the know say the music is formulaic, derivative and uninspired. Yet we were fooled – or, at least, millions of us were – taken in by an AI hoax, or what is now euphemistically labelled on Spotify as “a synthetic music project guided by human creative direction and composed, voiced and visualized with the support of artificial intelligence.”

Formulaic. Derivative. Synthetic. We mean these as criticisms. But they are accurate descriptions of exactly how AI works. It is synthesis by formulas (or algorithms) that parse billions or trillions of data points, identify patterns and derive the finished product from them. That is AI’s greatest strength… and its biggest downfall.
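To make “synthesis by formula” concrete, here is a deliberately crude sketch: a bigram Markov chain, about the simplest generative model there is. Modern AI is incomparably more capable, but the underlying move is the same – extract patterns from data, then derive “new” output from those patterns.

```python
# A toy illustration of "synthesis by formula": a bigram Markov chain
# that derives "new" text purely from patterns in its training data.
import random

corpus = "the river flows and the river changes and the man changes too".split()

# Build the pattern table: which words follow which.
follows: dict[str, list[str]] = {}
for a, b in zip(corpus, corpus[1:]):
    follows.setdefault(a, []).append(b)

# Derive a "new" sentence from those patterns alone.
random.seed(1)
word, out = "the", ["the"]
for _ in range(8):
    word = random.choice(follows.get(word, corpus))
    out.append(word)
print(" ".join(out))  # formulaic, derivative, synthetic -- by design
```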

The human brain, on the other hand, works quite differently. Our biggest constraint is the limit of our working memory. When we analyze disparate data points, the available slots in our temporary memory bank can number in the single digits. To function cognitively beyond this limit, we have to do two things: “chunk” the data together into mental building blocks and code those blocks with emotional tags. That is the human brain’s greatest strength… and, again, its biggest downfall. What the human brain is best at is what AI is unable to do. And vice versa.
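Here’s a toy illustration of what chunking buys us (the phone number is invented, and the seven-or-so-slot figure is the classic, much-debated estimate of working-memory capacity):

```python
# Toy illustration of "chunking": the same ten digits, held as ten
# separate items versus three familiar building blocks.

digits = list("8005551234")          # ten items: over a ~7-slot budget

def chunk(seq, sizes):
    """Group a flat sequence into chunks of the given sizes."""
    out, i = [], 0
    for size in sizes:
        out.append("".join(seq[i:i + size]))
        i += size
    return out

chunks = chunk(digits, [3, 3, 4])    # ['800', '555', '1234']: three items
print(len(digits), "items vs", len(chunks), "chunks")
```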

A few posts back, when talking about one less-than-impressive experience with an AI tool, I ended by musing about what role humans might play as AI evolves and becomes more capable. One possible answer is something labelled “HITL,” or “humans in the loop.” It plugs the “humanness” that sits in our brains into the equation, allowing AI to do what it’s best at and humans to provide the spark of intuition or the “gut checks” that currently cannot come from an algorithm.
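In code, the pattern is simple. Here’s a minimal sketch of a human-in-the-loop flow, with a hypothetical generate_draft() standing in for whatever model is doing the synthesis:

```python
# A minimal sketch of a human-in-the-loop (HITL) flow. generate_draft()
# is a hypothetical stand-in for a real model call; nothing here is a
# specific product's API.

def generate_draft(task: str, feedback: str = "") -> str:
    # Stand-in for the AI's synthesis step (e.g., an LLM call).
    note = f" (revised per: {feedback})" if feedback else ""
    return f"Draft output for: {task}{note}"

def human_gut_check(draft: str) -> str:
    """The human contribution: approve, or say what's wrong."""
    answer = input(f"---\n{draft}\n---\nApprove? (y, or type feedback) ")
    return answer.strip()

def run_with_human_in_loop(task: str, max_rounds: int = 3) -> str | None:
    feedback = ""
    for _ in range(max_rounds):
        draft = generate_draft(task, feedback)
        verdict = human_gut_check(draft)
        if verdict.lower() == "y":
            return draft        # the human signs off; output ships
        feedback = verdict      # the machine redrafts with human input
    return None                 # no approval: the human veto stands

if __name__ == "__main__":
    run_with_human_in_loop("outline a product page for hiking boots")
```

The point isn’t the code; it’s where the human sits – after every machine draft, before anything ships.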

As an example, let me return to the subject of that previous post, building a website. There is a lot that AI could do to build out a website. What it can’t do very well is anticipate how a human might interact with the website. These “use cases” should come from a human, perhaps one like me.

Let me tell you why I believe I’m qualified for the job. For many years, I studied online user behavior quite obsessively and published several white papers that are still cited in the academic world. I was a researcher for hire, with contracts with all the major online players. I say this not to pump up my own ego (okay, maybe a little bit – I am human, after all) but to set up the story of how I acquired this particular brand of expertise.

It was accumulated over time, as I learned to analyze online interactions, code eye-tracking sessions and talk to users about their goals and intentions. All the while, I was continually plugging new data into my few available working memory slots and “chunking” it into the building blocks of my expertise, to the point where I could quickly look at a website or search results page and make a pretty accurate “gut call” prediction of how a user would interact with it. This is – without exception – how humans become experts at anything. Malcolm Gladwell called it the “10,000-hour rule.” For humans to add any value “in the loop,” they must put in the time. There are no short cuts.

Or – at least – there never used to be. There is now, and that brings up a problem.

Humans now do something called “cognitive off-loading.” If something looks like it’s going to be a drudge to do, we get ChatGPT to do it. This is the slogging mental work that our brains are not particularly well suited to. That’s probably why we hate doing it – the brain is trying to shirk the work by tagging it with a negative emotion (brains are sneaky that way). Why not get AI, which can instantly sort through billions of data points and synthesize them into a one-page summary, to do our dirty work for us?

But by off-loading, we short-circuit the very process required to build that uniquely human expertise. Writer, researcher and educational change advocate Eva Keiffenheim outlines the potential danger for humans who off-load to a digital brain: we may lose the sole advantage we can offer in an artificially intelligent world. “If you can’t recall it without a device, you haven’t truly learned it. You’ve rented the information. We get stuck at ‘knowing about’ a topic, never reaching the automaticity of ‘knowing how.’”

For generations, we’ve treasured the concept of “know how.” Perhaps, in all that time, we forgot how much hard mental work was required to gain it. That could be why we are quick to trade it away now that we can.

Bots and Agents – The Present and Future of A.I.

This past weekend I got started on a website I told a friend I’d help him build. I’ve been building websites for over 30 years now, but for this one, I decided to use a platform that was new to me. Knowing there would be a significant learning curve, my plan was to use the weekend to learn the basics of the platform. As is now true everywhere, I had just logged into the dashboard when a window popped up asking if I wanted to use their new AI co-pilot to help me plan and build the website.

“What the hell?” I thought. “Let’s take it for a spin!” Even if it only lessened the learning curve a little, it could still save me dozens of hours. The promise was intriguing – the AI co-pilot would ask me a few questions and then give me back the basic bones of a fully functional website. Or, at least, that’s what I thought.

I jumped on the chatbot and started typing. With each question, my expectations rose. It started with the basics: what were we selling, what were our product categories, where was our market? Soon, though, it started asking what tone of voice I wanted, what our color scheme was, what search functionality was required, whether there were any competitors’ sites we liked or disliked, and if so, what specifically we liked or disliked about them. As I plugged in my answers, I wondered what exactly I would get back.

The answer, as it turned out, was not much. After being reassured that I had provided a strong enough brief for an excellent plan, I clicked the “finalize” button and waited. And waited. And waited. The ellipsis below my last input just kept fading in and out. Finally, I asked, “Are you finished yet?” I was encouraged to wait just a few more minutes as it prepared a plan guaranteed to amaze.

Finally – ta da! – I got the “detailed web plan.” As far as I could tell, it had simply sucked in my input and belched it back out, formatted as a bulleted list. I was profoundly underwhelmed.

Going into this, I had little experience with AI. I have used it sparingly for tasks that tend to have a well-defined scope. I have to say, I have been impressed more often than I have been disappointed, but I haven’t really kicked the tires of AI.

Every week, when I sit down to write this post, Microsoft Co-Pilot urges me to let it show what it can do. I have resisted, because when I do ask AI to write something for me, it reads like a machine did it. It’s worded correctly and usually gets the facts right, but there is no humanness in the process. One thing I think I have is an ability to connect the dots – to bring together seemingly unconnected examples or thoughts and hopefully join them together to create a unique perspective. For me, AI is a workhorse that can go out and gather the information in a utilitarian manner, but somewhere in the mix, a human is required to add the spark of intuition or inspiration. For now, anyway.

Meet Agentic AI

With my recent AI debacle still fresh in my mind, I happened across a blog post from Bill Gates. It seems I thought I was talking to an AI “agent” when, in fact, I was chatting with a “bot.” It’s agentic AI that will probably deliver the usefulness I’ve been looking for over the last decade and a half.

As it turns out, Gates was at least a decade and a half ahead of me in that search. He first talked about intelligent agents in his 1995 book The Road Ahead. But it’s only now that they’ve become possible, thanks to advances in AI. In his post, Gates describes the difference between bots and agents: “Agents are smarter. They’re proactive—capable of making suggestions before you ask for them. They accomplish tasks across applications. They improve over time because they remember your activities and recognize intent and patterns in your behavior. Based on this information, they offer to provide what they think you need, although you will always make the final decisions.”
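Here’s a toy sketch of that distinction, with invented classes (a real agent would be vastly more sophisticated): the bot is stateless and reactive, while the agent remembers past requests and volunteers a suggestion before being asked – using the same trip-planning example Gates does.

```python
# A toy sketch of the bot/agent distinction, with invented classes.
# The bot is reactive and stateless; the agent keeps a memory of past
# requests and volunteers a suggestion before being asked.

class Bot:
    """Answers the request in front of it; remembers nothing."""
    def respond(self, request: str) -> str:
        return f"Answer to: {request}"

class Agent:
    """Remembers activity and tries to recognize intent across requests."""
    def __init__(self) -> None:
        self.history: list[str] = []

    def respond(self, request: str) -> str:
        self.history.append(request)
        return f"Answer to: {request}"

    def suggest(self) -> str | None:
        # Crude intent detection, purely illustrative: repeated
        # trip-related requests trigger an unprompted offer.
        trip_signals = [r for r in self.history
                        if "flight" in r or "hotel" in r]
        if len(trip_signals) >= 2:
            return "Looks like you're planning a trip. Want a full itinerary?"
        return None

agent = Agent()
agent.respond("find a flight to Rome in May")
agent.respond("find a hotel near the Colosseum")
print(agent.suggest())   # the proactive step a bot never takes
```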

This is exactly the “app-ssistant” I first described in 2010 and have returned to a few times since, even down to using the same example Bill Gates did – planning a trip. This is what I was expecting when I took the web-design co-pilot for a test flight. I was hoping that – even if it couldn’t take me all the way from A to Z – it could at least get me to M. As it turned out, it couldn’t even get past A. I ended up exactly where I started.

But the day will come. And, when it does, I have to wonder if there will still be room on the flight for us human passengers.

The Double-Edged Sword of a “Doer” Society

Ask anyone who has come to the United States from somewhere else what attracted them. The most common answer is “because anything is possible here.” The U.S. is a nation of “doers.” It is that promise that has attracted wave after wave of immigration, made up of those chafing at the restraints and restrictions of their homelands. The concept of getting things done was embodied in a line Robert F. Kennedy famously borrowed from George Bernard Shaw: “Some men see things as they are and ask why? I dream of things that never were and ask why not?” The U.S. – more than anywhere else in the world – is the place to make those dreams come true.

But that comes with some baggage. Doers are individualists by definition. They are driven by what they can accomplish, by making something from nothing. And with that comes an obsessive focus on time. When there is so much we can do, we constantly worry about losing time. Time becomes one of the few constraints in a highly individualistic society.

But the US is not just individualistic. Other countries also score highly on individualistic traits, including Australia, the U.K., New Zealand and my own home, Canada. The U.S. is different in that it’s also vertically individualistic – it is a highly hierarchical society obsessed with personal achievement. And – in the U.S. – achievement is measured in dollars and cents. In a Freakonomics podcast episode, Gert Jan Hofstede, a professor of artificial sociality in the Netherlands, called out this difference: “When you look at cultures like New Zealand or Australia that are more horizontal in their individualism, if you try to stand out there, they call it the tall poppy syndrome. You’re going to be shut down.”

In the U.S., tall poppies are celebrated and given god-like status. The ultra-rich are held up as the ideal to aspire to. And this creates a problem in a nation of doers. If wealth is the ultimate goal, anything that stands between us and that goal is an obstacle to be eliminated.

When Breaking the Rules becomes The Rule

“Move fast and break things” – Mark Zuckerberg

In most societies, equality and fairness are the guardrails of governance. It was the U.S. that enshrined these in its constitution. Making sure things are fair and equal requires establishing the rule of law and setting social norms. But in the U.S., breaking the rules is celebrated if it’s required to get things done. From the same Freakonomics podcast, Michele Gelfand, a professor of organizational behavior at Stanford, said, “In societies that are tighter, people are willing to call out rule violators. Here in the U.S., it’s actually a rule violation to call out people who are violating norms.”

There is an inherent understanding in the US that trade-offs are sometimes necessary to achieve great things. It’s perhaps telling that Meta CEO Mark Zuckerberg is fascinated by the Roman emperor Augustus, a man generally recognized by history as having secured his achievements at significant societal cost, including the subjugation of conquered territories and the brutal, systematic elimination of his opponents. This is fully recognized and embraced by Zuckerberg, who has said of his historic hero: “Basically, through a really harsh approach, he established 200 years of world peace. What are the trade-offs in that? On the one hand, world peace is a long-term goal that people talk about today …(but)… that didn’t come for free, and he had to do certain things.”

Slipping from Entrepreneurialism to Entitlement

A reverence for “doing” can develop a toxic side when it becomes embedded in a society. In many cases, entrepreneurialism and entitlement are two sides of the same coin. In a culture where entrepreneurial success is celebrated and iconized by media, the focus of entrepreneurialism can shift from profitably solving a problem to simply profiting. Chasing wealth becomes the singular focus of “doing.” In a society that has always encouraged everyone to chase their dreams, no matter the cost, this can create an environment where the Tragedy of the Commons is repeated over and over again.

This creates a paradox – a society that celebrates extreme wealth without seeming to realize that the more that wealth is concentrated in the hands of the few, the less there is for everyone else. Simple math is not the language of dreams.

To return to Augustus for a moment, we should remember that he was the one responsible for dismantling an admittedly barely functioning republic and installing himself as autocratic emperor – doing away with democracy, consolidating power in his own hands and gutting Rome’s constitution.

Bread and Circuses: A Return to the Roman Empire?

Reality sucks. Seriously. I don’t know about you, but increasingly, I’m avoiding the news because I’m having a lot of trouble processing what’s happening in the world. So when I look to escape, I often turn to entertainment. And I don’t have to turn very far. Never has entertainment been more accessible to us. We carry entertainment in our pocket. A 24-hour smorgasbord of entertainment media is never more than a click away. That should give us pause, because there is a very blurred line between simply seeking entertainment to unwind and becoming addicted to it.

Some years ago I did an extensive series of posts on the Psychology of Entertainment. Recently, a podcast producer from Seattle ran across the series when he was producing a podcast on the same topic and reached out to me for an interview. We talked at length about the ubiquitous nature of entertainment and the role it plays in our society. In the interview, I said, “Entertainment is now the window we see ourselves through. It’s how we define ourselves.”

That got me thinking. If we define ourselves through entertainment, what does that do to our view of the world? In my own research for this column, I ran across another post on how we become addicted to entertainment. We do so because reality stresses us out: “Addictive behavior, especially when not to a substance, is usually triggered by emotional stress. We get lonely, angry, frustrated, weary. We feel ‘weighed down’, helpless, and weak.”

Check. That’s me. All I want to do is escape reality. The post goes on to say, “Escapism only becomes a problem when we begin to replace reality with whatever we’re escaping to.”

I believe we’re at that point. We are cutting our ties to reality and replacing them with a manufactured reality produced by the entertainment industry. In 1985 – forty years ago – author and educator Neil Postman warned us in his book Amusing Ourselves to Death that we were heading in this direction. The calendar had just ticked past the year 1984, and the world had collectively sighed in relief that the dystopian vision of George Orwell’s novel hadn’t materialized. Postman warned that it wasn’t Orwell’s future we should be worried about. It was Aldous Huxley’s forecast in Brave New World that seemed to be materializing:

“As Huxley remarked in Brave New World Revisited, the civil libertarians and rationalists who are ever on the alert to oppose tyranny ‘failed to take into account man’s almost infinite appetite for distractions.’ … Orwell feared that what we hate will ruin us. Huxley feared that what we love will ruin us.”

Postman was worried then – 40 years ago – that the news was more entertainment than information. Today, we long for even the kind of journalism Postman was already warning us about. He would be aghast to see what passes for news now.

While things unknown to Postman (social media, fake news, even the internet) are adding a new wrinkle to our slide into an entertainment-induced coma, none of this is exactly new. It has happened at least once before in history, but you have to go back almost 2,000 years to find the example. The Roman poet Juvenal summed it up in a single phrase – panem et circenses – “bread and circuses”:

“Already long ago, from when we sold our vote to no man, the People have abdicated our duties; for the People who once upon a time handed out military command, high civil office, legions — everything, now restrains itself and anxiously hopes for just two things: bread and circuses.”

Juvenal was referring to the strategy of the Roman emperors of providing free wheat, circus games and other entertainments to gain political power. In an academic article from 2000, historian Paul Erdkamp said the ploy was a “briberous and corrupting attempt of the Roman emperors to cover up the fact that they were selfish and incompetent tyrants.”

Perhaps history is repeating itself.

One thing we touched on in the podcast was a noticeable change in the entertainment industry itself. Scarlett Johansson noticed that the 2025 Academy Awards ceremony was a much more muted affair than in years past. There was hardly any political messaging, nor the usual sermons about how entertainment provides a beacon of hope and justice. In an interview with Vanity Fair, Johansson mused that perhaps it’s because almost all the major studios are now owned by Big Tech billionaires: “These are people that are funding studios. It’s all these big tech guys that are funding our industry, and funding the Oscars, and so there you go. I guess we’re being muzzled in all these different ways, because the truth is that these big tech companies are completely enmeshed in all aspects of our lives.”

If we have willingly swapped entertainment for reality, and that entertainment is being produced by corporations who profit from addicting as many eyeballs as possible, prospects for the future do not look good.

We should be taking a lesson from what happened to Imperial Rome.

The Question We Need to Ask about AI

This past weekend I listened to a radio call-in show about AI. The question posed was this: are those using AI regularly achievers or cheaters? A good percentage of the conversation focused on AI in education, especially in post-secondary studies. Educators worried about being able to detect the use of AI to complete coursework, such as the writing of papers. Many callers – all of whom were probably well north of 50 years old – bemoaned the fact that students today don’t understand the fundamental concepts they’re being presented with because they’re using AI to complete assignments. A computer science teacher explained why he teaches obsolete coding to his students – it helps them understand why they’re writing code at all. What is it they want the code to do? He can tell when his students are using AI because they submit examples of code well beyond their abilities.

That, in a nutshell, sums up the problem with our current thinking about AI. Why are we worried about trying to detect the use of ChatGPT by a student who’s learning how to write computer code? Shouldn’t we be instead asking why we need humans to learn coding at all, when AI is better at it? Maybe it’s a toss-up right now, but it’s guaranteed not to stay that way for long. This isn’t about students using AI to “cheat.” This is about AI making humans obsolete.

As I was writing this, I happened across an essay by computer scientist Louis Rosenberg. He is worried that those in his circle, like the callers to the show I was listening to, “have never really considered what life will be like the day after an artificial general intelligence (AGI) is widely available that exceeds our own cognitive abilities.” Like I said, what we use AI for now is a poor indicator of what AI will be doing in the future. To use an analogy I have used before, it’s like using a rocket to power your lawnmower.

But what will life be like when, in a somewhat chilling example put forward by Rosenberg, “I am standing alone in an elevator — just me and my phone — and the smartest one speeding between floors is the phone?”

It’s hard to wrap your mind around the possibilities. One of the callers to the show was a middle-aged man who was visually impaired. He talked about the difference it made when he got a pair of Meta glasses last Christmas. Suddenly, his world opened up. He could make sure the pants and shirt he picked out to wear were colors that matched. He could see whether his recycling had been picked up before he made the long walk down the driveway to retrieve the bin. He could cook for himself because the glasses could tell him what was in the boxes he took off his kitchen shelf. For him, AI gave him back his independence.

I personally believe we’re on the cusp of multiple AI revolutions. Healthcare will take a great leap forward when we lessen our reliance on expert advice coming only from humans. In Canada, general practitioners are in desperately short supply. When you combine AI with the leaps being made in biomonitoring through wearable technology, I can only imagine what might be possible in terms of living longer, healthier lives. I hope the same is true for dealing with climate change, agricultural production and the other existential problems we’re currently wrestling with.

But let’s back up to Rosenberg’s original question – what will life be like the day after AI exceeds our own abilities? The answer, I think, depends on who is in control of AI on the day before. The danger here is more than just humans becoming irrelevant. The danger is which humans are determining the future direction of AI before AI takes over the steering wheel and determines its own future.

For the past seven decades, the most pertinent question about our continued existence as a species has been this one: “Who is in charge of our combined nuclear arsenals?” But going forward, a more relevant question might be “Who is setting the direction for AI?” Who is setting the rules, coming up with safeguards and determining what data the models are training on? Who determines what tasks AI takes on? Here’s just one example: who decides whether AI gets a say in when nuclear warheads are launched?

As I said, it’s hard to predict where AI will go. But I do know this. The general direction is already being determined. And we should all be asking, “By whom?”

Strategies for Surviving the News

When I started this post, I was going to unpack some of the psychology behind the consumption of news. I soon realized the topic is far too big to deal with realistically within the confines of one post. So I narrowed my focus to a question that has been very top of mind for me lately: how do you stay informed without becoming a trembling psychotic mess? How do you arm yourself for informed action rather than being paralyzed into inaction by the fire hose of sheer WTF insanity that makes up the average news feed?

Pick Your Battles

There are few things more debilitating to humans than fretting about things we can’t do anything about. Research has found a strong correlation between depression and our locus of control – the term psychologists use for the extent to which we feel we can directly affect what happens to us. There is even a term for being so saturated with bad news that the world starts to look far more dangerous than it really is: Mean World Syndrome.

If effecting change is your goal, decide what is realistically within your scope of control. Then focus your information gathering on those specific things. When it comes to informing yourself to become a better change agent, going deep rather than wide might be a better strategy.

Be Deliberate about Your Information Gathering

The second strategy goes hand in hand with the first. Make sure you’re in the right frame of mind to gather information. There are two ways the brain processes information: top-down and bottom-up. Top-down processing is cognition with purpose – you have set an intent and you’re working to achieve specific goals. Bottom-up processing is passive – you’re exposed to random information and allow your brain to be stimulated by it. The way you interpret the news will be greatly affected by whether you’re processing it with top-down intent or letting your brain parse it from the bottom up.

By being more deliberate and gathering information with a specific intent in mind, you completely change how your brain will process the news. It will instantly put the news in a context related to your goal rather than letting it rampage through your brain, triggering your primordial anxiety circuits.

Understand the Difference between Signal and Noise

Based on the first two strategies, you’ve probably already guessed that I’m not a big fan of relying on social media as an information source. And you’re right. A brain doom-scrolling through a social media feed is not a brain primed to objectively process the news.

Here is what I did. For broad context, I picked two international information sources I trust to be objective: The New York Times and The Economist out of the U.K. I subscribed to both because I wanted sources that weren’t totally reliant on advertising as a revenue source (a toxic disease that is killing true journalism). For Americans, I would highly recommend picking at least one source outside the US to counteract the polarized echo chamber that typifies US journalism, especially the part of it that is completely ad-supported.

Depending on your objectives, include sources that are relevant to them. If local change is your goal, make sure you stay informed about your community. With those bases in place, even if you get sucked down a doom-scrolling rabbit hole, at least you’ll have a better context for separating signal from noise.

Put the Screen Down

I realize that the majority of people (about 54% of U.S. adults, according to Pew Research) will simply ignore all of the above and continue to be informed through their Facebook or X feeds. I can’t really change that.

But for the few of you out there who are concerned about the direction the world seems to be spinning and want to filter and curate your information sources to effect some real change, these strategies may be helpful.

For my part, I’m going to try to be much more deliberate in how I find and consume the news. I’m also going to be more disciplined about simply ignoring the news when I’m not actively looking for it. Taking a walk in the woods and interacting with real people are two things I’m going to try to do more.