Meta’s Social Media Battle Plan

My fellow Media Insider Maarten Albarda called it “The Big Tobacco Moment for Social Media” in his post last week. Then, just yesterday, Steve Rosenbaum added that the K.G.M. v. Meta Platforms case “signals a shift that cuts directly through the core defense platforms have relied on for decades.”

It was a seismic decision, and I’m pretty sure that, behind the closed doors of the conference rooms at 1 Meta Way, Menlo Park, California, a bunch of sweaty lawyers and Meta staff are rolling out the whiteboards (or the Meta Quest virtual-reality equivalent) and rolling up their sleeves to assess the potential damage and draw up a battle plan. Let’s take a moment to speculate about what they may be talking about.

In at least one of those conference rooms, Meta’s legal team is assessing one line of defence, which I’ll call Project “Hail Mary,” tapping into the current pop culture Zeitgeist. This involves an appeal of the $6 million decision. It’s not this case that’s worrying them. It’s the thousands waiting in the queue for the legal precedent to be set. The Meta legal team will be spending much of their foreseeable future in a courtroom. And even they know that the chances of a successful appeal are slim.

The second line of defence is to quantify the impact on Meta’s bottom line if the appeal is not successful. So let’s unpack that, because it deals with the elephant in the room, touched on in both Steve’s and Maarten’s posts: Is this the beginning of a slippery slope that will lead to the dismantling of algorithmic ad targeting and the demise of the endless scroll for everyone, or just for legal minors?

If we follow the lead of Australia, the first country to implement a social media ban for minors, it will be just minors – those under 16. The legislation was passed late last year and the ban officially took effect on December 10, 2025.

There are several countries around the world looking at implementing a similar ban, including Canada. Most are watching to see how Australia implements and polices its ban, as there are several thorny issues at play here. The countries seriously looking at it tend to share a similar legislative sentiment with Australia when it comes to consumer rights and privacy concerns. 

The U.S., under the current administration, is the least likely to implement federal restrictions on social media. Still, that is not keeping several states from introducing their own legislation. What the K.G.M. v. Meta decision does do is move the debate from the arena of federally controlled media to that of state-controlled online safety, privacy and mental health concerns. All will be watching the pending suits, which will likely fill up dockets in U.S. courts for the next few years at least.

Given the international aspect of this, it’s instructive to look at how Meta’s revenues break down by region.

The biggest share, 39%, comes from the U.S. and Canada, but 94% of that is from the U.S. We’re a Meta rounding error up here.

The Asia-Pacific is the second-biggest regional market – with 26.8% of global revenues. While the user numbers are huge, the revenue per user is much smaller than in North America. Several countries in this market are considering some type of age-based restriction on social media usage – largely driven by the academic concerns of parents and educators in China, Japan and Korea.

Next is Europe, with 23.2% of Meta’s revenue pie. If there is any jurisdiction likely to follow Australia’s lead, it’s the E.U., which has consistently shown leadership in implementing privacy protection legislation.

Finally, there is the rest of the world, which collectively accounts for about 11% of Meta’s revenues. When you consider this includes all of Africa, all of South America and whatever else is left, you can appreciate that attitudes towards legislation will be all over the map, both literally and figuratively.

Still, let’s say that a significant chunk of Meta’s revenue – say about 30 to 40% – comes from regions likely to pass legislation similar to Australia’s. Even then, that will almost certainly be directed only at minors younger than 16, who today make up less than 10% of Meta’s user base (between Instagram and Facebook). All those young people have gone to TikTok (where they make up 25% of the user base).

So, what Meta’s financial planners are probably talking about is the fact that – even in a worst-case legal scenario – we’re talking about 3 to 4% of their total user base that may be legislatively restricted in some form or another. If you’re in triage mode, that’s not severe enough to consider major surgery or amputation. Probably a band-aid will do the trick.
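For what it’s worth, the napkin math behind that triage call is simple enough to sketch out. Here’s a minimal back-of-envelope version using this post’s rough estimates (and, like the post, treating revenue share as a proxy for user share in those regions – these are my illustrative numbers, not Meta’s reporting):

```python
# Napkin math for the "band-aid" triage call above. Both inputs are this
# post's rough estimates, not Meta's actual reporting.

exposed_revenue_share = 0.35  # midpoint of the "30 to 40%" of revenue from
                              # regions likely to pass Australia-style laws
minor_user_share = 0.10       # "less than 10%" of Meta users are under 16

# Crude worst case: every exposed region restricts every under-16 user
restricted_share = exposed_revenue_share * minor_user_share
print(f"Potentially restricted share of users: {restricted_share:.1%}")
# -> 3.5%, squarely in the "3 to 4%" range cited above
```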

The Most Canadian of Social Networks

It may be the most polite social network in the world. It’s Hey.Cafe – a Facebook alternative built by Canadians for Canadians.

I first heard about Hey.Cafe through a reel on Facebook (oh, the irony) from Tod Maffin, a former CBC radio host, author and podcaster. Prompted by the not-so-veiled threats coming from south of the border, Tod’s been on a “buy Canadian” campaign for several months now, and that has recently extended to Canadian alternatives for the big social media platforms. It was Tod who suggested to every Canadian listening (currently about 10,000,000 a week, according to Tod’s website) that we check out Hey.Cafe.

So, I did. It turned out that Anthony Lee, the creator of Hey.Cafe, lives about an hour down the highway from me, here in the heart of beautiful British Columbia. So I reached out and we had a chat – a nice, polite Canadian chat. Because that’s how we do things up here.

The first thing I learned, which was a surprise, is that Hey.Cafe is not new. In fact, it’s been around since 2001. That means there was a version of Hey.Cafe before there was ever a Facebook (which started in 2004). In addition to running a tech support company out of Penticton, BC, Anthony has been developing alternatives to the major social media platforms for the better part of three decades now: “Whenever I thought, ‘Oh, I think I have an idea,’ I’d make some changes, that kind of stuff. But it definitely wasn’t a sit down and work on it all day thing, unless I had some time free that I was just like, ‘Yeah, I’ll spend this week working on stuff.’”

Then I asked the obvious question, “Why now? Why is Hey.Cafe suddenly gaining attention?”

There is the “buy Canadian” thing, of course. But Anthony said it’s more than just Canadians being fed up with an American president and his bluster. We’re also fed up with social media founders that have their noses firmly pressed up against said President’s posterior simply because it’s good for business.

And let’s not even get into the simmering cesspool every major social media platform has become, driven by an ad-obsessed business model that monetizes eyeballs at the expense of ethics. Lee concurred, “It’s all about algorithm for them. They don’t care if it’s someone you follow or not. If, if it looks like it’s gonna make some attention, whether it be good or bad, they’re gonna push it in the feed.”

So, are Canadians kicking Hey.Cafe’s tires like a rink-side Zamboni? Yes, finally. Thanks to the plug from Tod Maffin, users shot up from about 5,000 to over 40,000 in two weeks. And it’s still growing. Because it’s still a side-of-the-desk project, Anthony had to cap new accounts at 250 an hour.

Now, those numbers are infinitesimal compared to any of the major platforms, but they do signal a willingness by Canadians to try something not tied to business practices we don’t agree with. At the same time, it does bring up the elephant in the room for anyone going up against Facebook or any of the big platforms – the curse of Metcalfe’s Law. Metcalfe’s Law – named after Ethernet pioneer Robert Metcalfe – says that the value of a network is proportional to the square of the number of connected users. Having a telephone isn’t much use if no one else has one. For networks, bigger = better. And Facebook is currently 75,000 times bigger than Hey.Cafe.
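To see why that gap is a curse and not just a headcount, here’s a quick sketch of what Metcalfe’s squaring does to it. The user counts are rough illustrations I’m assuming for the example – Facebook at about three billion users, Hey.Cafe at the 40,000 mentioned above:

```python
# Metcalfe's Law: a network's value grows with the square of its user count.
# User figures are rough illustrations, not official numbers.

facebook_users = 3_000_000_000   # roughly three billion
heycafe_users = 40_000           # the post-Maffin-bump figure above

size_ratio = facebook_users / heycafe_users
value_ratio = size_ratio ** 2    # value grows with n squared, so the gap gets squared

print(f"Size gap:  {size_ratio:,.0f}x")    # -> 75,000x
print(f"Value gap: {value_ratio:,.0f}x")   # -> 5,625,000,000x
```

By Metcalfe’s logic, a 75,000-fold size gap is a multi-billion-fold value gap. That’s the hill every challenger has to climb.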

Given that, does Hey.Cafe stand a chance? I hope so. I supported it with a one-year subscription because I would love to see Anthony Lee’s side project survive and – hopefully – succeed. I did go on and post a few things. I even started a new “café” – Hey.Cafe’s version of a Facebook Group. So far, nothing much has happened there, but we’ll politely wait and see. Again, that’s how we do things up here.

What I did find, however, is a community that seems genuinely, politely happy to be there. And not all of them are Canadian. This was a post from a nurse newly arrived from the U.S.: “Newly landed nurse practitioner from Oregon via Boston (long story). Love the concept of no ads and AI. Now to find some other communities, Bernese Mountain Dogs and skiing!”

I did ask Anthony, given the audience MediaPost (where this post also runs) reaches, if there’s any message he’d like to pass on. For media buyers especially, he offered this, “Whether it be HeyCafe, Bluesky, Mastodon, (consider) using more services that aren’t the big three players. Use more stuff that puts you in the spotlight of communities that are all over the place.”

While Anthony would love for Hey.Cafe to be economically sustainable, maybe the takeaway here is not so much about financial success. Maybe these are Canadians signalling a change in our attitude. It’s as if we’ve been in an abusive relationship with Facebook for years but have put up with it because it’s been too hard to leave. But in abusive relationships there comes a red line which, once crossed, triggers you to begin planning your exit. It doesn’t happen immediately. It may not happen at all, but there is a significant mental shift where you become aware of how toxic the relationship really is and you start planning a life free from that toxicity.

For 40,000 Canadians and wannabe Canadians – at least – that switch may have happened.

How Seniors Get Sucked Into Bad Information

It happened to me last Thursday. I was tired, I was jet-lagged and I was feeling like garbage. My defenses were down. So, before I realized it, I was spinning down a social media sewer spiral. My thumb took over, doom-scrolling through post after post offering very biased commentary on the current state of the world, each reinforcing just how awful things are. Little was offered in the way of factual backup, and I didn’t bother looking for it. My mood plummeted. I alternated between paranoia, outrage and depression. An hour flew by as my brain was hijacked by a feckless feed.

And I know better. I really do. Up in my prefrontal cortex, I knew I was being sucked into a vicious vortex of AI slop and troll baiting. Each time I scrolled down, I would tell myself, “Okay, this is the last one. After this, put the phone down.” And each time, my thumb would ignore me.

This is not news to any of us. Every one of you reading this knows about the addictive nature of social media. And you also know the pernicious impacts of AI-generated content spoon-fed to us by an algorithm whose sole purpose is to hog-tie our willpower and keep our eyes locked on the screen. I also suspect that you, like me, think that because we know all this, we have built up at least some immunity to the siren call of social media.

But I’m here to tell you that social media has gotten really, really good at being really, really awful for us. I didn’t notice it so much when I was on my game, busy doing other things and directing my attention with a fully functional executive brain. But the minute my guard slipped, the minute my cognitive capacity shifted down into a lower gear, I was sucked into the misinformational sh*thole that is social media.

Being a guy who likes to ask why, I did exactly that when the jet lag finally dissipated. Why did I, a person who should know better, fall into the crappy content trap? “Maybe,” I said to myself reluctantly, “it’s a generational thing.” Maybe brains of a certain age are more susceptible to being cognitively hijacked and led astray.

A recent study from the University of Utah does lend some credence to that theory. Researchers found that adults older than 60 were more likely to share misinformation online than younger people. This was true for information about health, but a prior study showed an even higher tendency to swallow bad information when it came to politics.

Lead researcher Ben Lyons set out to find out why those of us north of 60 are more likely to be led astray by online misinformation. Spoiler alert – it doesn’t have anything to do with our brains slowing down or lower information literacy rates. It appears that older people can sniff out bullshit just as well as younger people. But it turns out that if that information, no matter how dubious it is, matches our own beliefs and world view, we’ll happily share it even if it doesn’t pass the smell test.

Lyons called this congeniality bias. I’ve talked before about the sensemaking cycle. In it, new information is matched to our existing belief schema. If it’s a match, we usually accept it without a lot of qualification. If it isn’t, we can choose to reject it or we can reframe our beliefs based on the new information. The second option is a lot more work and, it seems, the older we get, the less likely we are to do this heavy lifting. As we age, we get more fully locked into who we are and what we believe. We’ve spent a lot of years building our beliefs, and so we’re reluctant to stray from them.

Of course, like all things human, this tendency is neither a given nor universally applied. Some older people are naturally more skeptical, and some are more inflexible in their beliefs. Not surprisingly, Lyons found those who leaned right in their political affiliations tend to be more belief-bound.

But, as I discovered this past Thursday, these information filtering tendencies are dependent on our moods and cognitive capacity. I am a naturally skeptical person and like to think I’m usually pretty picky about my information sources. But this is true only when I’m on my game. The minute my brain down-shifted, I began accepting dubious information at face value simply because I happened to agree with it. I didn’t bother checking to make sure it was true.

It sounded true, and that was all that mattered.

Happy 25th Birthday, Wikipedia!

Wikipedia is perhaps the last remaining vestige of the Internet we thought we’d build, two and a half decades ago. It was born of the same stuff that fueled open-source software and freeware: open access to knowledge and a democratization of data. This was part of the Internet that was supposed to make the world a fairer and more knowledgeable place, narrowing the gap between the haves and have-nots. It was an “information superhighway” that would connect the global village and, according to the McGraw-Hill Computer Desktop Encyclopedia of 2001, “help all citizens regardless of their income level.”

We know better now. But despite the Internet’s hard pivot towards capitalism, Wikipedia is still around. It just celebrated its 25th birthday a few weeks ago. According to Wikipedia itself, Wikipedians from all over the world make 18 edits to its content every second. There are versions in over 300 different languages, and all of this receives 10,000 page views every second. There are over 7 million articles in the English version, and 500 new articles are added per day. In the last 25 years, almost 12 million users have edited the English Wikipedia at least once.

This was not what Jimmy Wales and Larry Sanger envisioned in 2001, when they started Wikipedia. It was just supposed to be a collaborative sandbox that would allow for editing and drafting of articles which would then be included in their other project, Nupedia. Nupedia was more centrally controlled and structured. This side project used the wiki platform developed by Ward Cunningham in 1994. Wiki is Hawaiian for “quick” and Cunningham thought it had a little more panache than just calling his platform something like “Quickweb.”

The concept behind wikis is all about creating and empowering collaborative communities, opening the platform up to anyone who wants to contribute. Wales and Sanger believed this would be a perfect way to quickly draft new entries at scale, but they still envisioned themselves and a team of editors as the gatekeepers who would control what would show up in Nupedia. But the pace of contribution soon outstripped the ability of Nupedia’s editorial team to keep up. The decision was made to completely open the doors to contribution and make Wikipedia the end destination.

This completely open concept was a preview of what was to come. It may have been one of the first times we saw what would become a common theme: a web-based platform unleashing the potential of a latent market by connecting an open community of suppliers (in this case, editors and contributors) and an audience of consumers at scale. It would be repeated by Uber, AirBnB and others.

The difference with Wikipedia was that – in this case – no one was making any money. The information was free. As a comparison, its competitor, the online version of Encyclopedia Britannica, charged a yearly subscription of $50.

This upset of the information market didn’t go down well with everyone. This was especially true for academics and researchers. Students were warned not to use Wikipedia as a source. It was roundly criticized for its open nature and lack of peer review. To this day, much of the academic community still looks down its nose at Wikipedia, even though at least one academic study has shown that Wikipedia’s accuracy is on a par with the Encyclopedia Britannica and far outstrips it in terms of the number of entries and the sheer breadth of content. This ongoing hostility towards Wikipedia is unfortunate, because the very same audience that sneers at it could be its most valuable contributors, especially in their own areas of expertise.

Of course, part of this lingering resentment could be due to the glacial resistance to change from academic publishers, many of whom are still clinging to exorbitant subscription models. These publishers are resisting, to the bitter end, writer and iconoclast Stewart Brand’s feeling that “information wants to be free.”

Despite all this, Wikipedia has not only survived but thrived. It is still very much a part of the online information ecosystem, 25 years after its birth. And yes, it might be an anachronistic and naïve throwback to a more idealistic time, but it has proven at least one maxim of the open-source community. Eric S. Raymond, in his seminal and prescient essay, The Cathedral and the Bazaar, called this maxim Linus’s Law, named after Linus Torvalds, the creator of the Linux kernel. The law states, “Given enough eyeballs, all bugs are shallow.”

Or, to paraphrase, “Given enough eyeballs, most Wikipedia entries are mostly accurate.”

We’re Constantly Rewriting Our History

“No man ever steps into the same river twice, because it is not the same river, and he is not the same man.” – Heraclitus

Time is a funny thing. It is fluid and flowing and ever-changing. It’s no surprise, then, that the Greek philosopher Heraclitus tried to describe it by using the analogy of a river. He then doubled down on the theme of change by saying it wasn’t only the river that was constantly changing. It was also the person stepping into the river. With time, nothing ever stays static. To try to capture the present we inhabit is simply taking a snapshot in time, from one of a million different vantage points.

This is also true when we look backwards. Like time itself, our history does not stay static. It is constantly being rewritten, depending on when and where we are and what our view of our own reality is. The past is constantly in flux – eternally in the process of being rewritten using the lens of today’s culture and political reality to interpret what happened yesterday.

This is happening everywhere.

Right now, in the occupied parts of Ukraine, school history curriculums are being rewritten en masse to conform to a Kremlin-approved version of the past dictated by Moscow’s Ministry of Enlightenment. References to Ukraine and Kyiv are being edited out. There are numerous mentions of Putin as the savior of the area’s true Russian heritage. Teachers who try to remain pro-Ukrainian are threatened with deportation or being sent for “re-training,” forcing many into hiding.

Here in Canada, the country’s history being taught in schools today bears scant resemblance to the history I learned as a child some six decades ago. The colonial heroes of the past (almost all of English, Scottish or French descent) are being re-examined in the light of our efforts to reconcile ourselves to our true history. What we know now is that many of the historic heroes we used to name universities after and erect statues to honor were astoundingly racist and complicit in a planned program of cultural eradication against our indigenous population.

And in the US, the MAGA-fication of cultural and heritage institutions is proceeding at a breakneck pace. Trump has tacked his name onto the Kennedy Center. The White House is in the process of being “bedazzled” into a grotesque version of its former stately self, cloaked in a design sensibility more suitable for a 17th-century French Sun King.

Perhaps the most overt example of rewriting history came with an executive order issued last year with the title “Restoring Truth and Sanity to American History.” This little Orwellian gem gives J.D. Vance (who sits on the Smithsonian’s Board of Regents) the power to eliminate “improper, divisive or anti-American ideology” from the museums and related centers. The inconvenient bits of history that this order aims to sweep under the carpet include slavery and the U.S.’s own sordid history of colonialism. These things have been determined to be “un-American.”

Compare all of this to the mission statement of the Smithsonian, which is to “increase and diffuse knowledge, providing Americans and the world with the tools and information they need to forge Our Shared Future.”

I wholeheartedly agree with that mission. I have said that we need to know our past to know what we aspire to be in the future. But that comes with a caveat: you have to embrace the past – as near as you’re able – for what it truly was, warts and all. Historians have an obligation not to whitewash the past. But we also must realize that actions we abhor today took place within a social context that made them more permissible – or even lauded – at the time. It is a historian’s job to record the past faithfully but also to interpret it given the societal and cultural context of the present.

This is the balancing act that historians have to engage in if we’re truly going to use the past as something we can learn from.

The Cost of Not Being Curious

The world is having a pandemic-proportioned wave of Ostrichitis.

Now, maybe you haven’t heard of Ostrichitis. But I’m willing to bet you’re showing at least some of the symptoms:

  • Avoiding newscasts, especially those that feature objective and unbiased reporting
  • Quickly scrolling past any online news items in your feed that look like they may be uncomfortable to read
  • Dismissing out of hand information coming from unfamiliar sources

These are the signs of Ostrichitis – or the Ostrich Effect – and I have all of them. This is actually a psychological effect, more pointedly called willful ignorance, which I wrote about a few years ago. And from where I’m observing the world, we all seem to have it to one extent or another.

I don’t think this avoidance of information comes as a shock to anyone. The world is a crappy place right now. And we all seem to have gained comfort from adopting the folk wisdom that “no news is good news.” Processing bad news is hard work, and we just don’t have the cognitive resources to crunch through endless cycles of catastrophic news. If the bad news affirms our existing beliefs, it makes us even madder than we already were. If it runs counter to our beliefs, it forces us to spin up our sensemaking mechanisms and reframe our view of reality. Either way, there are way more fun things to do.

A recent study from the University of Chicago attempted to pinpoint when children start avoiding bad news. The research team found that while young children don’t tend to put boundaries around their curiosity, as they age they start avoiding information that challenges their beliefs or their own well-being. The threshold seems to be about 6 years old. Before that, children actively seek information of all kinds (as any parent barraged by never-ending “whys” can tell you). After that, children start strategizing about the types of information they pay attention to.

Now, like everything about humans, curiosity tends to be an individual thing. Some of us are highly curious and some of us avoid seeking new information religiously. But even if we are a curious sort, we may pick and choose what we’re curious about. We may find “safe zones” where we let our curiosity out to play. If things look too menacing, we may protect ourselves by curbing our curiosity.

The unfortunate part of this is that curiosity, in all its forms, is almost always a good thing for humans (even if it can prove fatal to cats).

The more curious we are, the better tied we are to reality. The lens we use to parse the world is something called a sense-making loop. I’ve often referred to this in the past. It’s a processing loop that compares what we experience with what we believe, referred to as our “frame”. For the curious, this frame is often updated to match what we experience. For the incurious, the frame is held on to stubbornly, often by ignoring new information or bending information to conform to their beliefs. A curious brain is a brain primed to grow and adapt. An incurious brain is one that is stagnant and inflexible. That’s why the father of modern-day psychology, William James, called curiosity “the impulse towards better cognition.”
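For the technically inclined, here’s a toy version of that loop – entirely my own sketch, with made-up numbers standing in for beliefs, evidence and curiosity:

```python
# A toy model of the sense-making loop described above. The "frame" is a
# stored belief score per topic; "curiosity" controls how willing we are
# to update it. All names and numbers here are illustrative stand-ins.

def sense_making_step(frame: dict, topic: str, evidence: float,
                      curiosity: float) -> None:
    """Compare new evidence against our frame and (maybe) update it."""
    belief = frame.get(topic, 0.0)   # what we currently believe (-1 to 1)
    gap = evidence - belief          # how far experience is from the frame

    if abs(gap) < 0.2:
        pass                         # matches the frame: accept as-is
    else:
        # Curious brains reframe; incurious brains mostly ignore the gap.
        frame[topic] = belief + curiosity * gap

frame = {"news": -0.5}
sense_making_step(frame, "news", evidence=0.4, curiosity=0.8)   # curious
print(frame)   # belief moves most of the way toward the evidence

frame = {"news": -0.5}
sense_making_step(frame, "news", evidence=0.4, curiosity=0.05)  # incurious
print(frame)   # belief barely budges: the frame is held on to stubbornly
```

The only moving part is the curiosity setting: turn it down and the frame stops updating, which is exactly the stagnant, inflexible brain described above.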

When we think about the world we want, curiosity is a key factor in defining it. Curiosity keeps us moving forward. The lack of curiosity locks us in place or even pushes us backwards, causing the world to regress to a more savage and brutal place. Writers of dystopian fiction knew this. That’s why authors including H.G. Wells, Aldous Huxley, Ray Bradbury and George Orwell all made a lack of curiosity a key part of their bleak future worlds. Our current lack of curiosity is driving our world in the same dangerous direction.

For all these reasons, it’s essential that we stay curious, even if it’s becoming increasingly uncomfortable.

Being in the Room Where It Happens

I spent the past weekend attending a conference that I had helped to plan. As is now often the case, this was a hybrid conference; you could choose to attend in person or online via Zoom. Although it involved a long plane ride, I chose to attend in person. It could be because – as a planner – I wanted to see how the event played out. Also, it’s been a long time since I attended a conference away from my home. Or – maybe – it was just FOMO.

Whatever the reason, I’m glad I was there, in the room.

This was a very small conference planned on a shoestring budget. We didn’t have money for extensive IT support or AV equipment. We were dependent solely on a laptop and whatever sound equipment our host was able to supply. We knew going into the conference that this would make for a less-than-ideal experience for those attending virtually. But – even accounting for that – I found there was a huge gap in the quality of the experience between those who were there in person and those who were attending online. And, over the duration of the 3-day conference, I observed why that might be so.

This conference was a 50/50 mix of those who already knew each other and those who were meeting for the first time. Even those who were familiar with each other tended to connect more often via a virtual meeting platform than in a physical meeting space. I know that despite the convenience and efficiency of being able to meet online, something is lost in the process. After carefully observing what was happening in the room we were all in, I have a better understanding of what that loss might be: the vague and inexact art of creating a real bond with another person.

In that room, the bonding didn’t happen at the speaking podium and very seldom happened during the sessions we so carefully planned. It seeped in on the sidelines, over warmed-over coffee from conference centre urns, overripe bananas and the detritus of the picked-over pastry tray. The bonding came from all of us sharing and digesting a common experience. You could feel a palpable energy in the room. You could pick up the emotion, read the body language and tune in to the full bandwidth of communication that goes far beyond what could be transmitted between an onboard microphone and a webcam.

But it wasn’t just the sharing of the experience that created the bonds. It was the digesting of those experiences after the fact. We humans are herding animals, and that extends to how we come to consensus about things we go through together. We do so through communication with others – not just with words and gestures, but also through the full bandwidth of our evolved mechanisms for coming to a collective understanding. It wasn’t just that a camera and microphone couldn’t transmit that effectively; it was that it happened where there was no camera or mic.

As researchers have discovered, there is a lived reality and a remembered reality, and often they don’t look very much alike. The difference between the effectiveness of an in-person experience and one accessed through an online platform shouldn’t come as a surprise to us. This is due to how our evolved sense-making mechanisms operate. We make sense of reality both internally, through a comparison with our existing cognitive models, and externally, through interacting with others around us who have shared that same reality. This communal give-and-take colors what we take with us, in the form of both memories and an updated model of what we know and believe. When it comes to how humans are built, collective sense-making is a feature, not a bug.

I came away from that conference with much more than the content that was shared at the speaker dais. I also came away with a handful of new relationships, built on sharing an experience and, through that, laying down the first foundations of trust and familiarity. I would not hesitate to reach out to any of these new friends if I had a question about something or a project I felt they could collaborate on.

I think that’s true largely because I was in the room where it happened.

When Did the Future Become So Scary?

The TWA Hotel at JFK airport in New York gives one an acute case of temporal dissonance. It’s a step backwards in time to the “Golden Age of Travel” – the 1960s. But even though you’re transported back 60 years, it seems like you’re looking into the future. The original space – the TWA Flight Center – was designed by Eero Saarinen and opened in 1962. This was a time when America was in love with the idea of the future. Science and technology were going to be our saving grace. The future was going to be a utopian place filled with flying jet cars, benign robots and gleaming, sexy white curves everywhere. The TWA Flight Center was dedicated to that future.

It was part of our love affair with science and technology during the 60s. Corporate America was falling over itself to bring the space-age fueled future to life as soon as possible. Disney first envisioned the community of tomorrow that would become Epcot. Global Expos had pavilions dedicated to what the future would bring. There were four World Fairs over 12 years, from 1958 to 1970, each celebrating a bright, shiny white future. There wouldn’t be another for 22 years.

This fascination with the future was mirrored in our entertainment. Star Trek (pilot in 1964, series start in 1966) invited all of us to boldly go where no man had gone before, namely a future set roughly three centuries from then. For those of us of a younger age, The Jetsons (original series from 1962 to 63) indoctrinated an entire generation into this religion of future worship. Yes, tomorrow would be wonderful – just you wait and see!

That was then – this is now. And now is a helluva lot different.

Almost no one – especially in the entertainment industry – is envisioning the future as anything other than an apocalyptic hellhole. We’ve done an about-face and are grasping desperately for the past. The future went from being utopian to dystopian, seemingly in the blink of an eye. What happened?

It’s hard to nail down exactly when we went from eagerly awaiting the future to dreading it, but it appears to be sometime during the last two decades of the 20th Century. By the time the clock ticked over to the next millennium, our love affair was over. As Chuck Palahniuk, author of the 1999 novel Invisible Monsters, quipped, “When did the future go from being a promise to a threat?”

Our dread about the future might just be a fear of change. As the future we imagined in the 1960s started playing out in real time, perhaps we realized our vision was a little too simplistic. The future came with unintended consequences, including massive societal shifts. It’s like we collectively told ourselves, “Once burned, twice shy.” Maybe it was the uncertainty of the future that scared the bejeezus out of us.

But it could also be how we got our information about the impact of science and technology on our lives. I don’t think it’s a coincidence that our fear of the future coincided with the decline of journalism. Sensationalism and endless punditry replaced real reporting just about the time we started this about-face. When negative things happened, they were amplified. Fear was the natural result. We felt out of control, and we kept telling ourselves that things never used to be this way.

The sum total of all this was the spread of a recognized psychological affliction called Anticipatory Anxiety – the certainty that the future is going to bring bad things down upon us. This went from being a localized phenomenon (“my job interview tomorrow is not going to go well”) to a widespread angst (“the world is going to hell in a handbasket”). Call it Existential Anticipatory Anxiety.

Futurists are – by nature – optimists. They believe things will be better tomorrow than they are today. In the Sixties, we all leaned into the future. The opposite of this is something called Rosy Retrospection, and it often comes bundled with Anticipatory Anxiety. It is a known cognitive bias that comes with a selective memory of the past, tossing out the bad and keeping only the good parts of yesterday. It makes us yearn to return to the past, when everything was better.

That’s where we are today. It explains the worldwide swing to the right. MAGA is really a 4-letter encapsulation of Rosy Retrospection – Make America Great Again! Whether you believe that or not, it’s a message that is very much in sync with our current feelings about the future and the past.

As writer and right-leaning political commentator William F. Buckley said, “A conservative is someone who stands athwart history, yelling Stop!”

There Are No Short Cuts to Being Human

The Velvet Sundown fooled a lot of people, including millions of fans on Spotify and the writers and editors at Rolling Stone. It was a band that suddenly showed up on Spotify several months ago, with full albums of vintage Americana-styled rock. Millions started streaming the band’s songs – except there was no band. The songs, the album art, the band’s photos – it was all generated by AI.

When you know this and relisten to the songs, you swear you would have never been fooled. Those who are now in the know say the music is formulaic, derivative and uninspired. Yet we were fooled, or, at least, millions of us were – taken in by an AI hoax, or what is now euphemistically labelled on Spotify as “a synthetic music project guided by human creative direction and composed, voiced and visualized with the support of artificial intelligence.”

Formulaic. Derivative. Synthetic. We mean these as criticisms. But they are accurate descriptions of exactly how AI works. It is synthesis by formulas (or algorithms) that parse billions or trillions of data points, identify patterns and derive the finished product from them. That is AI’s greatest strength… and its biggest downfall.

The human brain, on the other hand, works quite differently. Our biggest constraint is the limit of our working memory. When we analyze disparate data points, the available slots in our temporary memory bank can number as few as the single digits. To cognitively function beyond this limit, we have to do two things: “chunk” them together into mental building blocks and code them with emotional tags. That is the human brain’s greatest strength… and, again, its biggest downfall. What the human brain is best at is what AI is unable to do. And vice versa.
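Here’s a toy illustration of chunking, the first of those two tricks. The four-slot limit is a common modern estimate I’m assuming for the example, and the phone number is made up:

```python
# Toy illustration of "chunking": ten digits overwhelm working memory as
# individual items but fit comfortably as three familiar chunks.
# The 4-slot limit is an assumed round number, not a precise figure.

WORKING_MEMORY_SLOTS = 4

raw_digits = list("2505551234")      # 10 separate items to hold
chunked = ["250", "555", "1234"]     # 3 phone-number-style building blocks

for label, items in [("unchunked", raw_digits), ("chunked", chunked)]:
    fits = len(items) <= WORKING_MEMORY_SLOTS
    print(f"{label}: {len(items)} items -> fits in working memory? {fits}")
```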

A few posts back when talking about one less-than-impressive experience with an AI tool, I ended by musing what role humans might play as AI evolves and becomes more capable. One possible answer is something labelled “HITL” or “Humans in the Loop.” It plugs the “humanness” that sits in our brains into the equation, allowing AI to do what it’s best at and humans to provide the spark of intuition or the “gut checks” that currently cannot come from an algorithm.

As an example, let me return to the subject of that previous post, building a website. There is a lot that AI could do to build out a website. What it can’t do very well is anticipate how a human might interact with the website. These “use cases” should come from a human, perhaps one like me.

Let me tell you why I believe I’m qualified for the job. For many years, I studied online user behavior quite obsessively and published several white papers that are still cited in the academic world. I was a researcher for hire, with contracts with all the major online players. I say this not to pump my own ego (okay, maybe a little bit – I am human after all) but to set up the process of how I acquired this particular brand of expertise.

It was accumulated over time, as I learned how to analyze online interactions, code eye-tracking sessions and talk to users about goals and intentions. All the while, I was continually plugging new data into my few available working memory slots and “chunking” it into the building blocks of my expertise, to the point where I could quickly look at a website or search results page and provide a pretty accurate “gut call” prediction of how a user would interact with it. This is – without exception – how humans become experts at anything. Malcolm Gladwell called it the “10,000-hour rule.” For humans to add any value “in the loop,” they must put in the time. There are no short cuts.

Or – at least – there never used to be. There is now, and that brings up a problem.

Humans now do something called “cognitive off-loading.” If something looks like it’s going to be a drudge to do, we now get ChatGPT to do it. This is the slogging mental work that our brains are not particularly well suited to. That’s probably why we hate doing it – the brain is trying to shirk the work by tagging it with a negative emotion (brains are sneaky that way). Why not get AI, which can instantly sort through billions of data points and synthesize them into a one-page summary, to do our dirty work for us?

But by off-loading, we short-circuit the very process required to build that uniquely human expertise. Writer, researcher and educational change advocate Eva Keiffenheim outlines the potential danger for humans who “off-load” to a digital brain: we may lose the sole advantage we can offer in an artificially intelligent world. “If you can’t recall it without a device, you haven’t truly learned it. You’ve rented the information. We get stuck at ‘knowing about’ a topic, never reaching the automaticity of ‘knowing how.’”

For generations, we’ve treasured the concept of “know how.” Perhaps, in all that time, we forgot how much hard mental work was required to gain it. That could be why we are quick to trade it away now that we can.

The Credibility Crisis

We in the western world are getting used to playing fast and loose with the truth. There is so much that is false around us – in our politics, in our media, in our day-to-day conversations – that it’s just too exhausting to hold everything to a burden of truth. Even the skeptical amongst us no longer have the cognitive bandwidth to keep searching for credible proof.

This is by design. Somewhere in the past four decades, politicians and society’s power brokers discovered that by pandering to beliefs rather than trading in facts, you can bend the truth to your will. Those who seek power and influence have struck paydirt in falsehoods.

In a cover story last summer in the Atlantic, journalist Anne Applebaum explains the method in the madness: “This tactic—the so-called fire hose of falsehoods—ultimately produces not outrage but nihilism. Given so many explanations, how can you know what actually happened? What if you just can’t know? If you don’t know what happened, you’re not likely to join a great movement for democracy, or to listen when anyone speaks about positive political change. Instead, you are not going to participate in any politics at all.”

As Applebaum points out, we have become a society of nihilists. We are too tired to look for evidence of meaning. There is simply too much garbage to shovel through to find it. We are pummeled by wave after wave of misinformation, struggling to keep our heads above the rising waters by clinging to the life preserver of our own beliefs. In the process, we run the risk of those beliefs becoming further and further disconnected from reality, whatever that might be. The cogs of our sensemaking machinery have become clogged with crap.

This reverses a consistent societal trend towards the truth that has been happening for the past several centuries. Since the Enlightenment of the 18th century, we have held reason and science as the compass points of our True North. These twin ideals were buttressed by our institutions, including our media outlets. Their goal was to spread knowledge. It is no coincidence that journalism flourished during the Enlightenment. Freedom of the press was constitutionally enshrined to ensure the press had both the right and the obligation to speak the truth.

That was then. This is now. In the U.S., institutions including the media, universities and even museums are being overtly threatened if they don’t participate in the willful obfuscation of objectivity that is coming from the White House. NPR and PBS, two of the most reliable news sources according to the Ad Fontes media bias chart, have been defunded by the federal government. Social media feeds are awash with AI slop. In a sea of misinformation, the truth becomes impossible to find. And – for our own sanity – we have had to learn to stop caring about that.

But here’s the thing about the truth. It gives us an unarguable common ground. It is consistent and independent from individual belief and perspective. As longtime senator Daniel Patrick Moynihan famously said, “Everyone is entitled to his own opinion, but not to his own facts.” 

When you trade in falsehoods, the ground is always shifting beneath your feet. The story constantly changes to match the current situation and the desired outcome. There are no bearings to navigate by. Everyone has their own compass, and they’re all pointing in different directions.

The path the world is currently going down is troubling in a number of ways, but perhaps the most troubling is that it simply isn’t sustainable. Sooner or later in this sea of deliberate chaos, credibility is going to be required to convince enough people to do something they may not want to do. And if you have consistently traded away your credibility by battling the truth, good luck getting anyone to believe you.