Being in the Room Where It Happens

I spent the past weekend attending a conference that I had helped to plan. As is now often the case, this was a hybrid conference; you could choose to attend in person or online via Zoom. Although it involved a long plane ride, I chose to attend in person. It could be because – as a planner – I wanted to see how the event played out. Also, it’s been a long time since I attended a conference away from home. Or – maybe – it was just FOMO.

Whatever the reason, I’m glad I was there, in the room.

This was a very small conference planned on a shoestring budget. We didn’t have money for extensive IT support or AV equipment. We were dependent solely on a laptop and whatever sound equipment our host was able to supply. We knew going into the conference that this would make for a less-than-ideal experience for those attending virtually. But – even accounting for that – I found there was a huge gap in the quality of that experience between those that were there and those that were attending online. And, over the duration of the 3-day conference, I observed why that might be so.

This conference was a 50/50 mix of those that already knew each other and those that were meeting each other for the first time. Even those who were familiar with each other tended to connect more often via a virtual meeting platform than in a physical meeting space. I know that despite the convenience and efficiency of being able to meet online, something is lost in the process. After three days of carefully observing what was happening in the room we were all in, I have a better understanding of what that loss might be – the vague and inexact art of creating a real bond with another person.

In that room, the bonding didn’t happen at the speaking podium and very seldom happened during the sessions we so carefully planned. It seeped in on the sidelines, over warmed-over coffee from conference centre urns, overripe bananas and the detritus of the picked-over pastry tray. The bonding came from all of us sharing and digesting a common experience. You could feel a palpable energy in the room. You could pick up the emotion, read the body language and tune in to the full bandwidth of communication that goes far beyond what could be transmitted between an onboard microphone and a webcam.

But it wasn’t just the sharing of the experience that created the bonds. It was the digesting of those experiences after the fact. We humans are herding animals, and that extends to how we come to consensus about things we go through together. We do so through communication with others – not just with words and gestures, but also through the full bandwidth of our evolved mechanisms for coming to a collective understanding. It wasn’t just that a camera and microphone couldn’t transmit that effectively; it was that it happened where there was no camera or mic.

As researchers have discovered, there is a lived reality and a remembered reality and often, they don’t look very much alike. The difference between the effectiveness of an in-person experience and one accessed through an online platform shouldn’t come as a surprise to us. This is due to how our evolved sense-making mechanisms operate. We make sense of reality both internally, through a comparison with our existing cognitive models and externally, through interacting with others around us who have shared that same reality. This communal give-and-take colors what we take with us, in the form of both memories and an updated model of what we know and believe. When it comes to how humans are built, collective sense making is a feature, not a bug.

I came away from that conference with much more than the content that was shared at the speaker dais. I also came away with a handful of new relationships, built on sharing an experience and, through that, laying down the first foundations of trust and familiarity. I would not hesitate to reach out to any of these new friends if I had a question about something or a project I felt we could collaborate on.

I think that’s true largely because I was in the room where it happened.

There Are No Short Cuts to Being Human

The Velvet Sundown fooled a lot of people, including millions of fans on Spotify and the writers and editors at Rolling Stone. It was a band that suddenly showed up on Spotify several months ago, with full albums of vintage Americana-styled rock. Millions started streaming the band’s songs – except there was no band. The songs, the album art, the band’s photos – it was all generated by AI.

When you know this and relisten to the songs, you swear you would have never been fooled. Those who are now in the know say the music is formulaic, derivative and uninspired. Yet we were fooled, or, at least, millions of us were – taken in by an AI hoax, or what is now euphemistically labelled on Spotify as “a synthetic music project guided by human creative direction and composed, voiced and visualized with the support of artificial intelligence.”

Formulaic. Derivative. Synthetic. We mean these as criticisms. But they are accurate descriptions of exactly how AI works. It is synthesis by formulas (or algorithms) that parse billions or trillions of data points, identify patterns and derive the finished product from them. That is AI’s greatest strength…and its biggest downfall.

The human brain, on the other hand, works quite differently. Our biggest constraint is the limit of our working memory. When we analyze disparate data points, the available slots in our temporary memory bank can number in the single digits. To cognitively function beyond this limit, we have to do two things: “chunk” them together into mental building blocks and code them with emotional tags. That is the human brain’s greatest strength… and again, its biggest downfall. What the human brain is best at is what AI is unable to do. And vice versa.

A few posts back, when talking about one less-than-impressive experience with an AI tool, I ended by musing about what role humans might play as AI evolves and becomes more capable. One possible answer is something labelled “HITL” or “Humans in the Loop.” It plugs the “humanness” that sits in our brains into the equation, allowing AI to do what it’s best at and humans to provide the spark of intuition or the “gut checks” that currently cannot come from an algorithm.

As an example, let me return to the subject of that previous post, building a website. There is a lot that AI could do to build out a website. What it can’t do very well is anticipate how a human might interact with the website. These “use cases” should come from a human, perhaps one like me.

Let me tell you why I believe I’m qualified for the job. For many years, I studied online user behavior quite obsessively and published several white papers that are still cited in the academic world. I was a researcher for hire, with contracts with all the major online players. I say this not to pump my own ego (okay, maybe a little bit – I am human after all) but to set up the process of how I acquired this particular brand of expertise.

It was accumulated over time, as I learned how to analyze online interactions, code eye-tracking sessions and talk to users about their goals and intentions. All the while, I was continually plugging new data into my few available working memory slots and “chunking” it into the building blocks of my expertise, to the point where I could quickly look at a website or search results page and provide a pretty accurate “gut call” prediction of how a user would interact with it. This is – without exception – how humans become experts at anything. Malcolm Gladwell called it the “10,000-hour rule.” For humans to add any value “in the loop” they must put in the time. There are no short cuts.

Or – at least – there never used to be. There is now, and that brings up a problem.

Humans now do something called “cognitive off-loading.” If something looks like it’s going to be a drudge to do, we now get ChatGPT to do it. This is the slogging mental work that our brains are not particularly well suited to. That’s probably why we hate doing it – the brain is trying to shirk the work by tagging it with a negative emotion (brains are sneaky that way). Why not get AI, which can instantly sort through billions of data points and synthesize them into a one-page summary, to do our dirty work for us?

But by off-loading, we short-circuit the very process required to build that uniquely human expertise. Writer, researcher and educational change advocate Eva Keiffenheim outlines the potential danger for humans who “off-load” to a digital brain: we may lose the sole advantage we can offer in an artificially intelligent world. “If you can’t recall it without a device, you haven’t truly learned it. You’ve rented the information. We get stuck at ‘knowing about’ a topic, never reaching the automaticity of ‘knowing how.’”

For generations, we’ve treasured the concept of “know how.” Perhaps, in all that time, we forgot how much hard mental work was required to gain it. That could be why we are quick to trade it away now that we can.

Bots and Agents – The Present and Future of A.I.

This past weekend I got started on a website I told a friend I’d help him build. I’ve been building websites for over 30 years now, but for this one, I decided to use a platform that was new to me. Knowing there would be a significant learning curve, my plan was to use the weekend to learn the basics of the platform. As is now true everywhere, I had just logged into the dashboard when a window popped up asking if I wanted to use their new AI co-pilot to help me plan and build the website.

“What the hell?” I thought. “Let’s take it for a spin!” Even if it could only lessen the learning curve a little bit, it could still save me dozens of hours. The promise was intriguing – the AI co-pilot would ask me a few questions and then give me back the basic bones of a fully functional website. Or, at least, that’s what I thought.

I jumped on the chatbot and started typing. With each question, my expectations rose. It started with the basics: what were we selling, what were our product categories, where was our market? Soon, though, it started asking me what tone of voice I wanted, what was our color scheme, what search functionality was required, were there any competitors’ sites that we liked or disliked, and if so, what specifically did we like or dislike? As I plugged in my answers, I wondered what exactly I would get back.

The answer, as it turned out, was not much. Once I was reassured that I had provided a strong enough brief for an excellent plan, I clicked the “finalize” button and waited. And waited. And waited. The ellipsis below my last input just kept fading in and out. Finally, I asked, “Are you finished yet?” I was encouraged to just wait a few more minutes as it prepared a plan guaranteed to amaze.

Finally – ta da! – I got the “detailed web plan.” As far as I could tell, it had simply sucked in my input and belched it out again, formatted as a bullet list. I was profoundly underwhelmed.

Going into this, I had little experience with AI. I have used it sparingly for tasks that tend to have a well-defined scope. I have to say, I have been impressed more often than I have been disappointed, but I haven’t really kicked the tires of AI.

Every week, when I sit down to write this post, Microsoft Copilot urges me to let it show what it can do. I have resisted, because when I do ask AI to write something for me, it reads like a machine did it. It’s worded correctly and usually gets the facts right, but there is no humanness in the process. One thing I think I have is an ability to connect the dots – to bring together seemingly unconnected examples or thoughts and hopefully join them together to create a unique perspective. For me, AI is a workhorse that can go out and gather information in a utilitarian manner, but somewhere in the mix, a human is required to add the spark of intuition or inspiration. For now, anyway.

Meet Agentic AI

With my recent AI debacle still fresh in my mind, I happened across a blog post from Bill Gates. It seems I thought I was talking to an AI “Agent” when, in fact, I was chatting with a “Bot.” It’s agentic AI that will probably deliver the usefulness I’ve been looking for over the last decade and a half.

As it turns out, Gates was at least a decade and a half ahead of me in that search. He first talked about intelligent agents in his 1995 book The Road Ahead. But it’s only now that they’ve become possible, thanks to advances in AI. In his post, Gates describes the difference between Bots and Agents: “Agents are smarter. They’re proactive—capable of making suggestions before you ask for them. They accomplish tasks across applications. They improve over time because they remember your activities and recognize intent and patterns in your behavior. Based on this information, they offer to provide what they think you need, although you will always make the final decisions.”

This is exactly the “app-ssistant” I first described in 2010 and have returned to a few times since, even down to using the same example Bill Gates did – planning a trip. This is what I was expecting when I took the web-design co-pilot for a test flight. I was hoping that – even if it couldn’t take me all the way from A to Z – it could at least get me to M. As it turned out, it couldn’t even get past A. I ended up exactly where I started.

But the day will come. And, when it does, I have to wonder if there will still be room on the flight for us human passengers.

The Double-Edged Sword of a “Doer” Society

Ask anyone who has come to the United States from somewhere else what attracted them. The most common answer is “because anything is possible here.” The U.S. is a nation of “doers”. It is that promise that has attracted wave after wave of immigration, made up of those chafing at the restraints and restrictions of their homelands. The concept of getting things done was embodied in a line Robert F. Kennedy made famous (borrowing from George Bernard Shaw): “Some men see things as they are and say, why. I dream things that never were and say, why not.” The U.S. – more than anywhere else in the world – is the place to make those dreams come true.

But that comes with some baggage. Doers are individualists by definition. They are driven by what they can accomplish, by making something from nothing. And with that comes an obsessive focus on time. When we have so much that we can do, we constantly worry about losing time. Time becomes one of the few constraints in a highly individualistic society.

But the U.S. is not just individualistic. There are other countries that score highly on individualistic traits, including Australia, the U.K., New Zealand and my own home, Canada. But the U.S. is different, in that it’s also vertically individualistic – it is a highly hierarchical society obsessed with personal achievement. And – in the U.S. – achievement is measured in dollars and cents. In a Freakonomics podcast episode, Gert Jan Hofstede, a professor of artificial sociality in the Netherlands, called out this difference: “When you look at cultures like New Zealand or Australia that are more horizontal in their individualism, if you try to stand out there, they call it the tall poppy syndrome. You’re going to be shut down.”

In the U.S., tall poppies are celebrated and given god-like status. The ultra-rich are held up as the ideal to be aspired to. And this creates a problem in a nation of doers. If wealth is the ultimate goal, anything that stands between us and that goal is an obstacle to be eliminated.

When Breaking the Rules Becomes the Rule

“Move fast and break things” – Mark Zuckerberg

In most societies, equality and fairness are the guardrails of governance. It was the U.S. that enshrined these in its constitution. Making sure things are fair and equal requires the establishment of the rule of law and the setting of social norms. But in the U.S., the breaking of rules is celebrated if it’s required to get things done. From the same Freakonomics podcast, Michele Gelfand, a professor of organizational behavior at Stanford, said, “In societies that are tighter, people are willing to call out rule violators. Here in the U.S., it’s actually a rule violation to call out people who are violating norms.”

There is an inherent understanding in the U.S. that sometimes trade-offs are necessary to achieve great things. It’s perhaps telling that Meta CEO Mark Zuckerberg is fascinated by the Roman emperor Augustus, a figure generally recognized by history as achieving what he did at significant societal cost, including the subjugation of conquered territories and the brutal, systematic elimination of his opponents. This is fully recognized and embraced by Zuckerberg, who has said of his historical hero, “Basically, through a really harsh approach, he established 200 years of world peace. What are the trade-offs in that? On the one hand, world peace is a long-term goal that people talk about today …(but)… that didn’t come for free, and he had to do certain things.”

Slipping from Entrepreneurialism to Entitlement

A reverence for “doing” can develop a toxic side when it becomes embedded in a society. In many cases, entrepreneurialism and entitlement are two sides of the same coin. In a culture where entrepreneurial success is celebrated and iconized by media, the focus of entrepreneurialism can often shift from trying to profitably solve a problem to simply profiting. Chasing wealth becomes the singular focus of “doing”. In a society that has always encouraged everyone to chase their dreams, no matter the cost, this can create an environment where the Tragedy of the Commons is repeated over and over again.

This creates a paradox – a society that celebrates extreme wealth without seeming to realize that the more that wealth is concentrated in the hands of the few, the less there is for everyone else. Simple math is not the language of dreams.

To return to Augustus for a moment, we should remember that he was the one responsible for dismantling an admittedly barely functioning republic and installing himself as the autocratic emperor by doing away with democracy, consolidating power in his own hands and gutting Rome’s constitution.

Bread and Circuses: A Return to the Roman Empire?

Reality sucks. Seriously. I don’t know about you, but increasingly, I’m avoiding the news because I’m having a lot of trouble processing what’s happening in the world. So when I look to escape, I often turn to entertainment. And I don’t have to turn very far. Never has entertainment been more accessible to us. We carry entertainment in our pocket. A 24-hour smorgasbord of entertainment media is never more than a click away. That should give us pause, because there is a very blurred line between simply seeking entertainment to unwind and becoming addicted to it.

Some years ago I did an extensive series of posts on the Psychology of Entertainment. Recently, a podcast producer from Seattle ran across the series when he was producing a podcast on the same topic and reached out to me for an interview. We talked at length about the ubiquitous nature of entertainment and the role it plays in our society. In the interview, I said, “Entertainment is now the window we see ourselves through. It’s how we define ourselves.”

That got me to thinking. If we define ourselves through entertainment, what does that do to our view of the world? In my own research for this column, I ran across another post on how we can become addicted to entertainment. And we do so because reality stresses us out: “Addictive behavior, especially when not to a substance, is usually triggered by emotional stress. We get lonely, angry, frustrated, weary. We feel ‘weighed down’, helpless, and weak.”

Check. That’s me. All I want to do is escape reality. The post goes on to say, “Escapism only becomes a problem when we begin to replace reality with whatever we’re escaping to.”

I believe we’re at that point. We are cutting ties to reality and replacing them with a manufactured reality coming from the entertainment industry. In 1985 – forty years ago – author and educator Neil Postman warned us in his book Amusing Ourselves to Death that we were heading in this direction. The calendar had just ticked past the year 1984, and the world had collectively sighed in relief that the vision from George Orwell’s novel of that name hadn’t materialized. Postman warned that it wasn’t Orwell’s future we should be worried about. It was Aldous Huxley’s forecast in Brave New World that seemed to be materializing:

“As Huxley remarked in Brave New World Revisited, the civil libertarians and rationalists who are ever on the alert to oppose tyranny ‘failed to take into account man’s almost infinite appetite for distractions’ … Orwell feared that what we fear will ruin us. Huxley feared that what we desire will ruin us.”

Postman was worried then – 40 years ago – that the news was more entertainment than information. Today, we long for even the kind of journalism that Postman was already warning us about. He would be aghast to see what passes for news now. 

While things unknown to Postman (social media, fake news, even the internet) are adding a new wrinkle to our downslide into an entertainment-induced coma, it’s not exactly new. This has happened at least once before in history, but you have to go back almost 2,000 years to find an example. In Imperial Rome, the poet Juvenal used a phrase that summed it up – panem et circenses – “bread and circuses”:

“Already long ago, from when we sold our vote to no man, the People have abdicated our duties; for the People who once upon a time handed out military command, high civil office, legions — everything, now restrains itself and anxiously hopes for just two things: bread and circuses.”

Juvenal was referring to the strategy of the Roman emperors of providing free wheat, circus games and other entertainment to gain political power. In an academic article from 2000, historian Paul Erdkamp said the ploy was a “briberous and corrupting attempt of the Roman emperors to cover up the fact that they were selfish and incompetent tyrants.”

Perhaps history is repeating itself.

One thing we touched on in the podcast was a noticeable change in the entertainment industry itself. Scarlett Johansson noticed the 2025 Academy Awards ceremony was a much more muted affair than in years past. There was hardly any political messaging or sermonizing about how entertainment provides a beacon of hope and justice. In an interview with Vanity Fair, Johansson mused that perhaps it’s because almost all the major studios are now owned by big-tech billionaires: “These are people that are funding studios. It’s all these big tech guys that are funding our industry, and funding the Oscars, and so there you go. I guess we’re being muzzled in all these different ways, because the truth is that these big tech companies are completely enmeshed in all aspects of our lives.”

If we have willingly swapped entertainment for reality, and that entertainment is being produced by corporations who profit from addicting as many eyeballs as possible, prospects for the future do not look good.

We should be taking a lesson from what happened to Imperial Rome.

The Question We Need to Ask about AI

This past weekend I listened to a radio call-in show about AI. The question posed was this – are those using AI regularly achievers or cheaters? A good percentage of the conversation was focused on AI in education, especially for those in post-secondary studies. Educators worried about being able to detect the use of AI to help complete coursework, such as the writing of papers. Many callers – all of whom were probably well north of 50 years old – bemoaned the fact that students today are not understanding the fundamental concepts they’re being presented with because they’re using AI to complete assignments. A computer science teacher explained why he teaches obsolete coding to his students – it helps them understand why they’re writing code at all. What is it they want the code to do? He can tell when his students are using AI because they submit examples of code that are well beyond their abilities.

That, in a nutshell, sums up the problem with our current thinking about AI. Why are we worried about trying to detect the use of ChatGPT by a student who’s learning how to write computer code? Shouldn’t we be instead asking why we need humans to learn coding at all, when AI is better at it? Maybe it’s a toss-up right now, but it’s guaranteed not to stay that way for long. This isn’t about students using AI to “cheat.” This is about AI making humans obsolete.

As I was writing this, I happened across an essay by computer scientist Louis Rosenberg. He is worried that those in his circle, like the callers to the show I was listening to, “have never really considered what life will be like the day after an artificial general intelligence (AGI) is widely available that exceeds our own cognitive abilities.” Like I said, what we use AI for now is a poor indicator of what AI will be doing in the future. To use an analogy I have used before, it’s like using a rocket to power your lawnmower.

But what will life be like when, in a somewhat chilling example put forward by Rosenberg, “I am standing alone in an elevator — just me and my phone — and the smartest one speeding between floors is the phone?”

It’s hard to wrap your mind around the possibilities. One of the callers to the show was a middle-aged man who was visually impaired. He talked about the difference it made to him when he got a pair of Meta Glasses last Christmas. Suddenly, his world opened up. He could make sure the pants and shirt he picked out to wear today were colors that matched. He could see if his recycling had been picked up before he made the long walk down the driveway to retrieve the bin. He could cook for himself because the glasses could tell him what was in the boxes he took off his kitchen shelf. AI gave him back his independence.

I personally believe we’re on the cusp of multiple AI revolutions. Healthcare will take a great leap forward when we lessen our reliance on expert advice coming from a human. In Canada, general practitioners are in desperately short supply. When you combine AI with the leaps being made by incorporating biomonitoring into wearable technology, it’s hard to imagine what might not be possible in terms of living longer, healthier lives. I hope the same is true for dealing with climate change, agricultural production and other existential problems we’re currently wrestling with.

But let’s back up to Rosenberg’s original question – what will life be like the day after AI exceeds our own abilities? The answer to that, I think, is dependent on who is in control of AI on the day before. The danger here is more than just humans becoming irrelevant. The danger is which humans are determining the future direction of AI before AI takes over the steering wheel and determines its own future.

For the past seven decades, the most pertinent question about our continued existence as a species has been this one: “Who is in charge of our combined nuclear arsenals?” But going forward, a more relevant question might be “Who is setting the direction for AI?” Who is setting the rules, coming up with safeguards and determining what data the models are training on? Who determines what tasks AI takes on? Here’s just one example: who decides whether AI gets to decide when nuclear warheads are launched?

As I said, it’s hard to predict where AI will go. But I do know this. The general direction is already being determined. And we should all be asking, “By whom?”

Strategies for Surviving the News

When I started this post, I was going to unpack some of the psychology behind the consumption of the news. I soon realized that the topic is far too big for this post to realistically deal with. So I narrowed my focus to a question that has been very top of mind for me lately: how do you stay informed without becoming a trembling psychotic mess? How do you arm yourself for informed action rather than becoming paralyzed into inaction by the recent fire hose of sheer WTF insanity that makes up the average news feed?

Pick Your Battles

There are few things more debilitating to humans than fretting about things we can’t do anything about. Research has found a strong correlation between depression and our locus of control – the term psychologists use for how much we feel we can directly impact the things that happen to us. There is even a term for being so saturated with bad news that we come to believe the world is far more dangerous than it actually is: Mean World Syndrome.

If effecting change is your goal, decide what is realistically within your scope of control. Then focus your information gathering on those specific things. When it comes to informing yourself to become a better change agent, going deep rather than wide might be a better strategy.

Be Deliberate about Your Information Gathering

The second strategy goes hand in hand with the first. Make sure you’re in the right frame of mind to gather information. There are two ways the brain processes information: top-down and bottom-up. Top-down processing is cognition with purpose – you have set an intent and you’re working to achieve specific goals. Bottom-up processing is passively being exposed to random information and allowing your brain to be stimulated by it. The way you interpret the news will be greatly impacted by whether you’re processing it with top-down intent or letting your brain parse it from the bottom up.

By being more deliberate in gathering information with a specific intent in mind, you completely change how your brain will process the news. It will instantly put it in a context related to your goal rather than letting it rampage through your brain, triggering your primordial anxiety circuits.

Understand the Difference between Signal and Noise

Based on the first two strategies, you’ve probably already guessed that I’m not a big fan of relying on social media as an information source. And you’re right. A brain doom-scrolling through a social media feed is not a brain primed to objectively process the news.

Here is what I did. For the broad context, I picked two international information sources I trust to be objective: The New York Times and The Economist out of the U.K. I subscribed to both because I wanted sources that weren’t totally reliant on advertising as a revenue source (a toxic disease which is killing true journalism). For Americans, I would highly recommend picking at least one source outside the U.S. to counteract the polarized echo chamber that typifies U.S. journalism, especially that which is completely ad supported.

Depending on your objectives, include sources that are relevant to those objectives. If local change is your goal, make sure you are informed about your community. With those bases in place, even if you get sucked down a doom-scrolling rabbit hole, at least you’ll have a better context to allow you to separate signal from noise.

Put the Screen Down

I realize that the majority of people (about 54% of U.S. adults, according to Pew Research) will simply ignore all of the above and continue to be informed through their Facebook or X feeds. I can’t really change that.

But for the few of you out there that are concerned about the direction the world seems to be spinning and want to filter and curate your information sources to effect some real change, these strategies may be helpful.

For my part, I’m going to try to be much more deliberate in how I find and consume the news.  I’m also going to be more disciplined about simply ignoring the news when I’m not actively looking for it. Taking a walk in the woods or interacting with a real person are two things I’m going to try to do more.

Getting Tired of Trying to Tell the Truth

It’s not always easy writing these weekly posts. I try to deal with things of consequence, and usually I choose things that may be negative in nature. I also try to learn a little bit more about these topics by doing some research and approaching the post in a thoughtful way.  All of this means I have gone down several depressing rabbit holes in the course of writing these pieces over the years.

I have to tell you that, cumulatively, it takes a toll. Some weeks, it can only be described as downright depressing. And that’s just me, someone who only does these once a week. What if this were my full-time job? What if I were a journalist reporting on an ever more confounding world? How would I find the motivation to do my job every day?

The answer, at least according to a recent survey of 402 journalists by the PR industry platform Muck Rack, is that I could well be considering a different job. Last year, 56% of those journalists considered quitting.

The reasons are many. I and others have repeatedly talked about the dismal state of journalism in North America. The advertising-based economic model that supports true reporting is falling apart. Publishers have found that it’s more profitable to pander to prejudice and preconceived beliefs than it is to actually try to report the truth and hope to change people’s minds. When it comes to journalism, it appears that Colonel Nathan R. Jessup (from the movie A Few Good Men) may have been right. We can’t handle the truth. We prefer to be spoon-fed polarized punditry that aligns with our beliefs. When profitability is based on the number of eyeballs gained, you get a lot more of them at a far lower cost by peddling opinion rather than proof. This has led to round after round of mass layoffs, cutting newsroom staffing by double-digit percentages.

This reality brings a crushing load of economic pressure down on journalists. According to the Muck Rack survey, most journalists are battling burnout due to working longer hours with fewer resources. But it’s not just the economic constraints that are taking their toll on journalists. A good part of the problem is the evolving nature of how news develops and propagates through our society.

There used to be such a thing as a 24-hour news cycle, defined by a daily publication deadline, whether that was the printing of a newspaper or the broadcast of the nightly news. As tight as 24 hours was, it was downright leisurely compared to the split-second reality of today’s information environment. News stories develop, break and fade from significance in minutes now, rather than over days or weeks as was true in the past. And that means a journalist who hopes to keep up always has to be on. There is no such thing as downtime or being “off the grid.” Even with new tools and platforms to help monitor and filter the tidal wave of signal and noise that is today’s information ecosystem, a journalist always has to be plugged in and logged on to do their job.

That is exhausting.

But perhaps the biggest reason why journalists are considering a career change is not the economic constraints nor the hours worked. It’s the nature of the job itself. No one chooses to be a journalist because they want to get rich. It’s a career built on passion. Good journalists want to do something significant and make a difference. They do it because they value objectivity and truth and believe that by reporting it, they can raise the level of thought and discourse in our society. Given the apparent dumpster fire that seems to sum up the world today, can you blame them for becoming disillusioned with their chosen career?

All of this is tremendously sad. But even more than that, it is profoundly frightening. In a time when we need more reliably curated, reliably reported information about the state of affairs than ever before, those we have always trusted to provide this are running – in droves – towards the nearest exit.

My 1000th Post – and My 20-Year Journey

Note: This week marks the 1000th post I’ve written for MediaPost. On this blog, all of those posts are here, plus a number that I’ve written for other publications or exclusively for Out of My Gord. But the sentiments here apply to all those posts. If you’re wondering, I’ve written 1233 posts in total.

According to the MediaPost search tool, this is my 1000th post for this publication. There are a few duplicates in there, but I’m not going to quibble. No matter how you count them up, that’s a lot of posts.

My first post was written on August 19th, 2004. Back then I wrote exclusively for the emerging search industry. Google was only 6 years old. It had just gone public, with investors hoping to cash in on this new thing called paid search. Social media was even greener. Facebook was just months old and still confined to college campuses. Something called Myspace had launched the year before.

In the 20 years I’ve written for MediaPost, I’ve bounced from masthead to masthead. My editorial bent evolved from being search-industry specific to eventually finding my sweet spot at the intersection of human behavior and technology.

It’s been a long and usually interesting journey. When I started, I was the parent of two young children who I dragged along to industry events, using the summer search conference in San Jose as an opportunity to take a family camping vacation. I am now a grandfather, and I haven’t been to a digital conference for almost 10 years (the last being the conferences I used to host and program for the good folks here at MediaPost).

When I started writing these posts, I was both a humanist and a technophile. I believed that people were inherently good, and that technology would be the tool we would use to be better. The Internet was just starting to figure out how to make money, but it was still idealistic enough that people like me believed it would be mostly a good thing. Google still had the phrase “Don’t be Evil” as part of its code of conduct.

Knowing this post was coming up, I’ve spent the past few months wondering what I’d write when the time came. I didn’t want it to be yet another look back at the past 20 years. What history I have included is there to provide some context.

No, I wanted this to be about what this journey has been like for me. There is one thing about having an editorial deadline that forces you to come up with something to write about every week or two: it compels you to pay attention. It also forces you to think. The person I am now – what I believe and how I think about both people and technology – has been shaped in no small part by writing these 1000 posts over the past 20 years.

So, if I started as a humanist and technophile, what am I now, 20 years later? That is a very tough question to answer. I am much more pessimistic now. And this post has forced me to examine the causes of my pessimism.

I realized I am still a humanist. I still believe that if I’m face to face with a stranger, I’ll always place my bet on them helping me if I need it. I have faith that it will pay off more often than it won’t. If anything, we humans may be just a tiny little bit better than we were 20 years ago: a little more compassionate, a little more accepting, a little more kind.

So, if humans haven’t changed, what has? Why do I have less faith in the future than I did 20 years ago? Something has certainly changed. But what was it, I wondered?

Coincidentally, as I was thinking of this, I was also reading the late Philip Zimbardo’s book – The Lucifer Effect: Understanding How Good People Turn Evil. Zimbardo was the researcher who oversaw the Stanford Prison Experiment, where ordinary young men were randomly assigned roles as guards or inmates in a makeshift prison set up in a Stanford University basement. To make a long story short – ordinary people started doing such terrible things that they had to cut the experiment short after just 6 days.

 Zimbardo reminded me that people are usually not dispositionally completely good or bad, but we can find ourselves in situations that can push us in either direction. We all have the capacity to be good or evil. Our behavior depends on the environment we function in. To use an analogy Zimbardo himself used, it may not be the apples that are bad. It could be the barrel.

So I realized, it isn’t people who have changed in the last 20 years, but the environment we live in. And a big part of that environment is the media landscape we have built in those two decades. That landscape looks nothing like it did back in 2004.  With the help of technology, we have built an information landscape that doesn’t really play to the strengths of humanity. It almost always shows us the worst side of ourselves. Journalism has been replaced by punditry. Dialogue and debate have been pushed out of the way by demagoguery and divisiveness.

So yes, I’m more pessimistic now than I was when I started this journey 20 years ago. But there is a glimmer of hope here. If people had truly changed, there wouldn’t be a lot we could do about that. But if it’s the media landscape that’s changed, that’s a different story. Because we built it, we can also fix it.

It’s something I’ll be thinking about as I start a new year.

Why The World No Longer Makes Sense

Does it seem that the world no longer makes sense? That may not just be you. The world may in fact no longer be making sense.

In the late 1960s, psychologist Karl Weick introduced the world to the concept of sensemaking, but we were making sense of things long before that. It’s the mental process we go through to try to reconcile who we believe we are with the world in which we find ourselves. It’s how we give meaning to our lives.

Weick identified seven properties critical to the process of sensemaking. I won’t mention them all, but here are three that are especially important to keep in mind:

  1. Who we believe we are forms the foundation we use to make sense of the world.
  2. Sensemaking needs retrospection. We need time to mull over new information we receive and form it into a narrative that makes sense to us.
  3. Sensemaking is a social activity. We look for narratives that seem plausible, and when we find them, we share them with others.

I think you see where I’m going with this. Simply put, our ability to make sense of the world is in jeopardy, for both internal and external reasons.

External to us, the quality of the narratives that are available to us to help us make sense of the world has nosedived in the past two decades. Prior to social media and the implosion of journalism, there was a baseline of objectivity in the narratives we were exposed to. One would hope that there was a kernel of truth buried somewhere in what we heard, read or saw on major news providers.

But that’s not the case today. Sensationalism has taken over journalism, driven by the need for profitability by showing ads to an increasingly polarized audience. In the process, it’s dragged the narratives we need to make sense of the world to the extremes that lie on either end of common sense.

This wouldn’t be quite as catastrophic for sensemaking if we were more skeptical. The sensemaking cycle does allow us to judge the quality of new information for ourselves, deciding whether it fits with our frame of what we believe the world to be, or if we need to update that frame. But all that validation requires time and cognitive effort. And that’s the second place where sensemaking is in jeopardy: we don’t have the time or energy to be skeptical anymore. The world moves too quickly to be mulled over.

In essence, sensemaking is how we create a model of the world that we can use without having to think too much. It’s our own proxy for reality. And, as a model, it is subject to all the limitations that come with modeling. As the British statistician George E.P. Box said, “All models are wrong, but some are useful.”

What Box didn’t say is that the more wrong our model is, the less likely it is to be useful. And that’s the looming issue with sensemaking. The model we use to determine what is real is becoming less and less tethered to actual reality.

It was exactly that problem that prompted Daniel Schmachtenberger and others to set up the Consilience Project. The idea of the Project is this – the more diversity in perspectives you can include in your model, the more likely the model is to be accurate. That’s what “consilience” means: pulling perspectives from different disciplines together to get a more accurate picture of complex issues.  It literally means the “jumping together” of knowledge.

The Consilience Project is trying to reverse the erosion of modern sensemaking – both from an internal and external perspective – that comes from the overt polarization and the narrowing of perspective that currently typifies the information sources we use in our own sensemaking models.  As Schmachtenberger says,  “If there are whole chunks of populations that you only have pejorative strawman versions of, where you can’t explain why they think what they think without making them dumb or bad, you should be dubious of your own modeling.”

That, in a nutshell, explains the current media landscape. No wonder nothing makes sense anymore.