When Did the Future Become So Scary?

The TWA hotel at JFK airport in New York gives one an acute case of temporal dissonance. It’s a step backwards in time to the “Golden Age of Travel” – the 1960s. But even though you’re transported back 60 years, it seems like you’re looking into the future. The original space – the TWA Flight Center – was designed by Eero Saarinen and opened in 1962. This was a time when America was in love with the idea of the future. Science and technology were going to be our saving grace. The future was going to be a utopian place filled with flying jet cars, benign robots and gleaming, sexy white curves everywhere. The TWA Flight Center was dedicated to that future.

It was part of our love affair with science and technology during the 60s. Corporate America was falling over itself to bring the space-age fueled future to life as soon as possible. Disney first envisioned the community of tomorrow that would become Epcot. Global Expos had pavilions dedicated to what the future would bring. There were four World Fairs over 12 years, from 1958 to 1970, each celebrating a bright, shiny white future. There wouldn’t be another for 22 years.

This fascination with the future was mirrored in our entertainment. Star Trek (pilot in 1964, series start in 1966) invited all of us to boldly go where no man had gone before, namely a future set roughly three centuries from then. For those of us of a younger age, The Jetsons (original series from 1962 to 63) indoctrinated an entire generation into this religion of future worship. Yes, tomorrow would be wonderful – just you wait and see!

That was then – this is now. And now is a helluva lot different.

Almost no one – especially in the entertainment industry – is envisioning the future as anything other than an apocalyptic hellhole. We’ve done an about-face and are grasping desperately for the past. The future went from being utopian to dystopian, seemingly in the blink of an eye. What happened?

It’s hard to nail down exactly when we went from eagerly awaiting the future to dreading it, but it appears to be sometime during the last two decades of the 20th Century. By the time the clock ticked over to the next millennium, our love affair was over. As Chuck Palahniuk, author of the 1999 novel Invisible Monsters, quipped, “When did the future go from being a promise to a threat?”

Our dread about the future might just be a fear of change. As the future we imagined in the 1960s started playing out in real time, perhaps we realized our vision was a little too simplistic. The future came with unintended consequences, including massive societal shifts. It’s like we collectively told ourselves, “Once burned, twice shy.” Maybe it was the uncertainty of the future that scared the bejeezus out of us.

But it could also be how we got our information about the impact of science and technology on our lives. I don’t think it’s a coincidence that our fear of the future coincided with the decline of journalism. Sensationalism and endless punditry replaced real reporting just about the time we started this about-face. When negative things happened, they were amplified. Fear was the natural result. We felt out of control, and we kept telling ourselves that things never used to be this way.

The sum total of all this was the spread of a recognized psychological affliction called Anticipatory Anxiety – the certainty that the future is going to bring bad things down upon us. This went from being a localized phenomenon (“my job interview tomorrow is not going to go well”) to a widespread angst (“the world is going to hell in a handbasket”). Call it Existential Anticipatory Anxiety.

Futurists are – by nature – optimists. They believe things will be better tomorrow than they are today. In the Sixties, we all leaned into the future. The opposite of this is something called Rosy Retrospection, and it often comes bundled with Anticipatory Anxiety. It is a known cognitive bias that comes with a selective memory of the past, tossing out the bad and keeping only the good parts of yesterday. It makes us yearn to return to the past, when everything was better.

That’s where we are today. It explains the worldwide swing to the right. MAGA is really a 4-letter encapsulation of Rosy Retrospection – Make America Great Again! Whether you believe that or not, it’s a message that is very much in sync with our current feelings about the future and the past.

As writer and right-leaning political commentator William F. Buckley said, “A conservative is someone who stands athwart history, yelling Stop!”

There Are No Short Cuts to Being Human

The Velvet Sundown fooled a lot of people, including millions of fans on Spotify and the writers and editors at Rolling Stone. It was a band that suddenly showed up on Spotify several months ago, with full albums of vintage Americana styled rock. Millions started streaming the band’s songs – except there was no band. The songs, the album art, the band’s photos – it was all generated by AI.

When you know this and relisten to the songs, you swear you would have never been fooled. Those who are now in the know say the music is formulaic, derivative and uninspired. Yet we were fooled, or, at least, millions of us were – taken in by an AI hoax, or what is now euphemistically labelled on Spotify as “a synthetic music project guided by human creative direction and composed, voiced and visualized with the support of artificial intelligence.”

Formulaic. Derivative. Synthetic. We mean these as criticisms. But they are accurate descriptions of exactly how AI works. It is synthesis by formulas (or algorithms) that parse billions or trillions of data points, identify patterns and derive the finished product from them. That is AI’s greatest strength…and its biggest downfall.

The human brain, on the other hand, works quite differently. Our biggest constraint is the limit of our working memory. When we analyze disparate data points, the number of available slots in our temporary memory bank can be in the single digits. To cognitively function beyond this limit, we have to do two things: “chunk” the data together into mental building blocks and code those blocks with emotional tags. That is the human brain’s greatest strength… and again, its biggest downfall. What the human brain is best at is what AI is unable to do. And vice versa.

A few posts back when talking about one less-than-impressive experience with an AI tool, I ended by musing what role humans might play as AI evolves and becomes more capable. One possible answer is something labelled “HITL” or “Humans in the Loop.” It plugs the “humanness” that sits in our brains into the equation, allowing AI to do what it’s best at and humans to provide the spark of intuition or the “gut checks” that currently cannot come from an algorithm.
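To make the idea a little more concrete, here’s a minimal sketch of what a human-in-the-loop checkpoint might look like. The function names (generate_draft, human_review) are my own hypothetical illustration, not any vendor’s actual API; the point is simply that the AI drafts, and a human approves, edits or rejects before anything moves forward.

```python
# A minimal human-in-the-loop sketch (illustrative only; the function names are hypothetical).
# The AI produces a first pass; a person supplies the "gut check" before anything is accepted.

def generate_draft(prompt: str) -> str:
    # Stand-in for an AI call that synthesizes a draft from patterns in its training data.
    return f"[AI draft for: {prompt}]"

def human_review(draft: str) -> str:
    # The human in the loop: accept the draft, rewrite it, or send it back.
    print(draft)
    decision = input("Accept (a), edit (e), or reject (r)? ").strip().lower()
    if decision == "a":
        return draft
    if decision == "e":
        return input("Enter your revised version: ")
    raise ValueError("Draft rejected – back to the AI it goes.")

if __name__ == "__main__":
    approved = human_review(generate_draft("use cases for a small retail website"))
    print("Approved output:", approved)
```

The code itself is trivial; the hard part is deciding who is qualified to sit in the human_review seat.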

As an example, let me return to the subject of that previous post, building a website. There is a lot that AI could do to build out a website. What it can’t do very well is anticipate how a human might interact with the website. These “use cases” should come from a human, perhaps one like me.

Let me tell you why I believe I’m qualified for the job. For many years, I studied online user behavior quite obsessively and published several white papers that are still cited in the academic world. I was a researcher for hire, with contracts with all the major online players. I say this not to pump my own ego (okay, maybe a little bit – I am human after all) but to set up the process of how I acquired this particular brand of expertise.

It was accumulated over time, as I learned how to analyze online interactions, code eye-tracking sessions and talk to users about their goals and intentions. All the while, I was continually plugging new data into my few available working memory slots and “chunking” it into the building blocks of my expertise, to the point where I could quickly look at a website or search results page and provide a pretty accurate “gut call” prediction of how a user would interact with it. This is – without exception – how humans become experts at anything. Malcolm Gladwell called it the “10,000-hour rule.” For humans to add any value “in the loop” they must put in the time. There are no short cuts.

Or – at least – there never used to be. There is now, and that brings up a problem.

Humans now do something called “cognitive off-loading.” If something looks like it’s going to be a drudge to do, we now get ChatGPT to do it. This is the slogging mental work that our brains are not particularly well suited to. That’s probably why we hate doing it – the brain is trying to shirk the work by tagging it with a negative emotion (brains are sneaky that way). Why not get AI, which can instantly sort through billions of data points and synthesize them into a one-page summary, to do our dirty work for us?

But by off-loading, we short-circuit the very process required to build that uniquely human expertise. Writer, researcher and educational change advocate Eva Keiffenheim outlines the potential danger for humans who “off-load” to a digital brain; we may lose the sole advantage we can offer in an artificially intelligent world: “If you can’t recall it without a device, you haven’t truly learned it. You’ve rented the information. We get stuck at ‘knowing about’ a topic, never reaching the automaticity of ‘knowing how.’”

For generations, we’ve treasured the concept of “know how.” Perhaps, in all that time, we forgot how much hard mental work was required to gain it. That could be why we are quick to trade it away now that we can.

The Credibility Crisis

We in the western world are getting used to playing fast and loose with the truth. There is so much that is false around us – in our politics, in our media, in our day-to-day conversations – that it’s just too exhausting to hold everything to a standard of truth. Even the skeptical amongst us no longer have the cognitive bandwidth to keep searching for credible proof.

This is by design. Somewhere in the past four decades, politicians and society’s power brokers discovered that by pandering to beliefs rather than trading in facts, you can bend the truth to your will. Those who seek power and influence have struck paydirt in falsehoods.

In a cover story last summer in the Atlantic, journalist Anne Applebaum explains the method in the madness: “This tactic—the so-called fire hose of falsehoods—ultimately produces not outrage but nihilism. Given so many explanations, how can you know what actually happened? What if you just can’t know? If you don’t know what happened, you’re not likely to join a great movement for democracy, or to listen when anyone speaks about positive political change. Instead, you are not going to participate in any politics at all.”

As Applebaum points out, we have become a society of nihilists. We are too tired to look for evidence of meaning. There is simply too much garbage to shovel through to find it. We are pummeled by wave after wave of misinformation, struggling to keep our heads above the rising waters by clinging to the life preserver of our own beliefs. In the process, we run the risk of those beliefs becoming further and further disconnected from reality, whatever that might be. The cogs of our sensemaking machinery have become clogged with crap.

This reverses a consistent societal trend towards the truth that has been happening for the past several centuries. Since the Enlightenment of the 18th century, we have held reason and science as the compass points of our True North. These twin ideals were buttressed by our institutions, including our media outlets. Their goal was to spread knowledge. It is no coincidence that journalism flourished during the Enlightenment. Freedom of the press was constitutionally enshrined to ensure the press had both the right and the obligation to speak the truth.

That was then. This is now. In the U.S., institutions including media, universities and even museums are being overtly threatened if they don’t participate in the wilful obfuscation of objectivity that is coming from the White House. NPR and PBS, two of the most reliable news sources according to the Ad Fontes media bias chart, have been defunded by the federal government. Social media feeds are awash with AI slop. In a sea of misinformation, the truth becomes impossible to find. And – for our own sanity – we have had to learn to stop caring about that.

But here’s the thing about the truth. It gives us an unarguable common ground. It is consistent and independent from individual belief and perspective. As longtime senator Daniel Patrick Moynihan famously said, “Everyone is entitled to his own opinion, but not to his own facts.” 

When you trade in falsehoods, the ground is constantly shifting beneath your feet. The story keeps changing to match the current situation and the desired outcome. There are no bearings to navigate by. Everyone has their own compass, and they’re all pointing in different directions.

The path the world is currently going down is troubling in a number of ways, but perhaps the most troubling is that it simply isn’t sustainable. Sooner or later in this sea of deliberate chaos, credibility is going to be required to convince enough people to do something they may not want to do. And if you have consistently traded away your credibility by battling the truth, good luck getting anyone to believe you.

Bots and Agents – The Present and Future of A.I.

This past weekend I got started on a website I told a friend I’d help him build. I’ve been building websites for over 30 years now, but for this one, I decided to use a platform that was new to me. Knowing there would be a significant learning curve, my plan was to use the weekend to learn the basics of the platform. As is now true everywhere, I had just logged into the dashboard when a window popped up asking if I wanted to use their new AI co-pilot to help me plan and build the website.

“What the hell?” I thought, “Let’s take it for a spin!” Even if it only lessened the learning curve a little, it could still save me dozens of hours. The promise was intriguing – the AI co-pilot would ask me a few questions and then give me back the basic bones of a fully functional website. Or, at least, that’s what I thought.

I jumped on the chatbot and started typing. With each question, my expectations rose. It started with the basics: what were we selling, what were our product categories, where was our market? Soon, though, it started asking me what tone of voice I wanted, what our color scheme was, what search functionality was required, whether there were any competitors’ sites we liked or disliked, and if so, what specifically we liked or disliked about them. As I plugged in my answers, I wondered what exactly I would get back.

The answer, as it turned out, was not much. After being reassured that I had provided a strong enough brief for an excellent plan, I clicked the “finalize” button and waited. And waited. And waited. The ellipsis below my last input just kept fading in and out. Finally, I asked, “Are you finished yet?” I was encouraged to just wait a few more minutes as it prepared a plan guaranteed to amaze.

Finally – ta da! – I got the “detailed web plan.” As far as I could tell, it had simply sucked in my input and belched it out again, formatted as a bullet list. I was profoundly underwhelmed.

Going into this, I had little experience with AI. I have used it sparingly for tasks that tend to have a well-defined scope. I have to say, I have been impressed more often than I have been disappointed, but I haven’t really kicked the tires of AI.

Every week, when I sit down to write this post, Microsoft Co-Pilot urges me to let it show what it can do. I have resisted, because when I do ask AI to write something for me, it reads like a machine did it. It’s worded correctly and usually gets the facts right, but there is no humanness in the process. One thing I think I have is an ability to connect the dots – to bring together seemingly unconnected examples or thoughts and hopefully join them together to create a unique perspective. For me, AI is a workhorse that can go out and gather the information in a utilitarian manner, but somewhere in the mix, a human is required to add the spark of intuition or inspiration. For now, anyway.

Meet Agentic AI

With my recent AI debacle still fresh in my mind, I happened across a blog post from Bill Gates. It seems I thought I was talking to an AI “Agent” when, in fact, I was chatting with a “Bot.” It’s agentic AI that will probably deliver the usefulness I’ve been looking for for the last decade and a half.

As it turns out, Gates was at least a decade and a half ahead of me in that search. He first talked about intelligent agents in his 1995 book The Road Ahead. But it’s only now that they’ve become possible, thanks to advances in AI. In his post, Gates describes the difference between Bots and Agents: “Agents are smarter. They’re proactive—capable of making suggestions before you ask for them. They accomplish tasks across applications. They improve over time because they remember your activities and recognize intent and patterns in your behavior. Based on this information, they offer to provide what they think you need, although you will always make the final decisions.”

This is exactly the “app-ssistant” I first described in 2010 and have returned to a few times since, even down to using the same example Bill Gates did – planning a trip. This is what I was expecting when I took the web-design co-pilot for a test flight. I was hoping that – even if it couldn’t take me all the way from A to Z – it could at least get me to M. As it turned out, it couldn’t even get past A. I ended up exactly where I started.

But the day will come. And, when it does, I have to wonder if there will still be room on the flight for us human passengers.

Paging Dr. Robot

When it comes to the benefits of A.I., one of the most intriguing opportunities is in healthcare. Microsoft recently announced that, in a diagnostic challenge pitting its Microsoft AI Diagnostic Orchestrator (MAI-DxO) against 21 general practitioners, the A.I. system correctly diagnosed 85% of 300 challenging cases gathered from the New England Journal of Medicine. The human doctors only managed to get 20% of the diagnoses correct.

This is of particular interest to me, because Canada has a health care problem. In a recent comparison of international health policies conducted by the Commonwealth Fund, Canada came in last amongst 9 countries, most of which also have universal health care, on most key measures of timely access.

This is a big problem, but it’s not an unsolvable one. This does not qualify as a “wicked” problem, which I’ve talked about before. Wicked problems have no clear solution. I believe our healthcare problems can be solved, and A.I. could play a huge role in the solution.

The Canadian Medical Association has outlined both the problems facing our healthcare system and some potential solutions. The overarching narrative is one of a system stretched beyond its resources and patients unable to access care in a timely manner. Human resources are burnt out and demotivated. Our back-end health record systems are siloed and inconsistent. An aging population, health misinformation, political beliefs and climate change are creating more demand for health services just as the supply of those services is being depleted.

Here’s one personal example of the gaps in our own health records. I recently had to go to my family doctor for a physical that is required to maintain my commercial driver’s license. I was handed off to a student doctor, given that it was a very routine check-up. Because I was seeing the doctor anyway, I thought it a good time to ask for a regular blood panel, because it had been a while since I had had one. Being a male of a certain age, I also asked for a Prostate-Specific Antigen (PSA) test and was told that it isn’t recommended as a screening test in my province anymore.

I was taken aback. I had been diagnosed with prostate cancer a decade earlier and had been successfully treated for it. It was a PSA test that led to an early diagnosis. I mentioned this to the doctor, who was sitting behind a computer screen with my records in front of him. He looked back at the screen and said, “Oh, you had prostate cancer? I didn’t know that. Sure, I’ll add a PSA to the requisition.”

I wish I could say that’s an isolated incident, but it’s not. These gaps in our medical history records happen all the time here in my part of Canada. And they can all be solved. Aggregating and analyzing data at a scale beyond what humans can handle is exactly what A.I. excels at. Yet our healthcare system continues to overwork exhausted healthcare providers and keep our personal health data hostage in siloed data centers because of systemic resistance to technology. I know there are concerns, but surely these concerns can be addressed.

I write this from a Canadian perspective, but I know these problems – and others – exist in the U.S. as well. If A.I. can do certain jobs four times better than a human, it’s time to accept that and build it into our healthcare system. The answers to Canada’s healthcare problems may not be easy, but they are doable: integrate our existing health records, open the door to incorporating personal biometric data from new wearable devices, use A.I. to analyze all of it, and use humans where they can do things A.I. and technology can’t.

We need to start opening our minds to new solutions, because when it comes to a broken healthcare system, it’s literally a matter of life and death.

The Question We Need to Ask about AI

This past weekend I listened to a radio call-in show about AI. The question posed was this – are those using AI regularly achievers or cheaters? A good percentage of the conversation was focused on AI in education, especially post-secondary studies. Educators worried about being able to detect the use of AI to help complete coursework, such as the writing of papers. Many callers – all of whom were probably well north of 50 years old – bemoaned the fact that students today don’t understand the fundamental concepts they’re being presented with because they’re using AI to complete assignments. A computer science teacher explained why he teaches obsolete coding to his students – it helps them understand why they’re writing code at all. What is it they want the code to do? He can tell when his students are using AI because they submit examples of coding that are well beyond their abilities.

That, in a nutshell, sums up the problem with our current thinking about AI. Why are we worried about trying to detect the use of ChatGPT by a student who’s learning how to write computer code? Shouldn’t we be instead asking why we need humans to learn coding at all, when AI is better at it? Maybe it’s a toss-up right now, but it’s guaranteed not to stay that way for long. This isn’t about students using AI to “cheat.” This is about AI making humans obsolete.

As I was writing this, I happened across an essay by computer scientist Louis Rosenberg. He is worried that those in his circle, like the callers to the show I was listening to, “have never really considered what life will be like the day after an artificial general intelligence (AGI) is widely available that exceeds our own cognitive abilities.” Like I said, what we use AI for now is a poor indicator of what AI will be doing in the future. To use an analogy I have used before, it’s like using a rocket to power your lawnmower.

But what will life be like when, in a somewhat chilling example put forward by Rosenberg, “I am standing alone in an elevator — just me and my phone — and the smartest one speeding between floors is the phone?”

It’s hard to wrap your mind around the possibilities. One of the callers to the show was a middle-aged man who was visually impaired. He talked about the difference it made when he got a pair of Meta Glasses last Christmas. Suddenly, his world opened up. He could make sure the pants and shirt he picked out to wear were colors that matched. He could see if his recycling had been picked up before he made the long walk down the driveway to retrieve the bin. He could cook for himself because the glasses could tell him what was in the boxes he took off his kitchen shelf. AI gave him back his independence.

I personally believe we’re on the cusp of multiple AI revolutions. Healthcare will take a great leap forward when we lessen our requirement that expert advice come from a human. In Canada, general practitioners are in desperately short supply. When you combine AI with the leaps being made in incorporating biomonitoring into wearable technology, I can only imagine what might become possible in terms of living longer, healthier lives. I hope the same is true for dealing with climate change, agricultural production and the other existential problems we’re currently wrestling with.

But let’s back up to Rosenberg’s original question – what will life be like the day after AI exceeds our own abilities? The answer to that, I think, depends on who is in control of AI the day before. The danger here is more than just humans becoming irrelevant. The danger is which humans are determining the future direction of AI before AI takes over the steering wheel and determines its own future.

For the past seven decades, the most pertinent question about our continued existence as a species has been this one: “Who is in charge of our combined nuclear arsenals?” But going forward, a more relevant question might be, “Who is setting the direction for AI?” Who is setting the rules, coming up with safeguards and determining what data the models are training on? Who determines what tasks AI takes on? Here’s just one example: when does AI get to decide whether the nuclear warheads are launched?

As I said, it’s hard to predict where AI will go. But I do know this. The general direction is already being determined. And we should all be asking, “By whom?”

Will There Be a Big-Tech Reckoning?

Jeff Bezos, Mark Zuckerberg and Tim Cook must be thanking their lucky stars that Elon Musk is who he is. Musk is taking the brunt of any anti-Trump backlash and seems to be relishing it. Heaven only knows what is motivating Musk, but he is casting a smoke screen so wide and dense that it’s obscuring the ass-kissing being done by the rest of the high-tech oligarchs. In addition to Bezos, Zuckerberg and Cook, Microsoft’s Satya Nadella, Google’s Sundar Pichai and many other high-tech leaders have been making goo-goo eyes at Donald Trump.

Let’s start with Jeff Bezos. One assumes he is pandering to the president because his companies have government contracts worth billions. That pandering has included a pilgrimage to Trump’s Mar-a-Lago, a $1 million donation to his inauguration fund (which was streamed live on Amazon Prime) and green-lighting a documentary on Melania Trump. The Bezos-owned Washington Post declined to endorse Kamala Harris as a presidential candidate, prompting some of its editorial staff to resign. Amazon, meanwhile, has backed off some of its climate pledge commitments and started stripping Diversity, Equity and Inclusion programs from its HR handbook.

Mark Zuckerberg joined Trump-supporting podcaster Joe Rogan for almost three hours to explain how he was realigning Facebook to be more Trump-friendly. This included canning its fact-checkers and ending its policing of misinformation. During the interview, Zuckerberg took the opportunity to slam the media and the outgoing Biden administration for daring to question Facebook about misleading posts about Covid-19 vaccines. Zuckerberg, like Bezos, also donated $1 million to Trump’s inaugural fund and has rolled back DEI initiatives at Meta.

Tim Cook’s political back-bend has been a little more complicated. On the face of it, Apple’s announcement that it would be investing more than $500 billion in the U.S. and creating thousands of new jobs certainly sounds like a massive kiss to the Trumpian posterior, but if you dig through the details, it’s really just putting a new spin on commitments Apple had already made to support the development of its own AI. And in many cases, the capital investment isn’t even coming from Apple. For instance, that new A.I. server manufacturing plant in Houston that was part of the announcement? It’s actually being built by Apple partner Foxconn, not Apple.

As for the rest of the Big Tech cabal, including Microsoft, Google and OpenAI, their new alignment with Trump is not surprising. Trump is promising to make the U.S. the undisputed leader in A.I. One would also imagine he would be more inclined than the Democrats to look the other way when it comes to things like anti-trust investigations and enforcement. So Big Tech’s deference to Trump is both entirely predictable and completely self-serving. I’m also guessing that all of them think they’re smarter than Trump and his administration, giving them a strategic opportunity to play Trump like a fiddle while pursuing their long-term corporate goals free from any governmental oversight or resistance. All evidence to date suggests they’re probably not mistaken in that assumption.

But all this comes at what cost? It could play out one of two ways. First, what happens if these high-tech frat rats’ bets are wrong? There is an anti-Trump, anti-MAGA revolt building. Who knows what will happen, but in politically unprecedented times like these, one has to consider every scenario, no matter how outrageous it may seem. One scenario is that a significant percentage of Republicans decide their political future (and, hopefully, the future of the U.S. as a democracy also factors into their thinking) is better off without a Donald Trump in it and start the wheels turning to remove him from power. If that happens, things are going to get really, really nasty. There is going to be recrimination and finger-pointing everywhere. And some of those fingers are going to be pointed at the big tech leaders who scraped the ground bowing to Trump’s bluster and bullying.

Will that translate into a backlash against high tech? I really am not sure. To date, these companies have been remarkably adept at sloughing off blame. If MAGA ends up going down in flames, will Big Tech even get singed as they warm their hands at Donald Trump’s own bonfire of his vanities? Will we care about Big Tech’s obsequiousness when it comes time to order something from Amazon or get a new iPhone?

Probably not.  

But the other scenario is even more frightening: Trump stays in power and Big Tech is free to do whatever the hell it wants. Based on what you know about Elon Musk, Mark Zuckerberg, Jeff Bezos and the rest, are you willing to let them be the sole architects of your future? Their about-face on Trump has shown that they will always, always, always place profitability above their personal ethics.

Curation is Our Future. But Can You Trust It?

 You can get information from anywhere. But the meaning of that information can come from only one place: you. Everything we take in from the vast ecosystem of information that surrounds us goes through the same singular lens – one crafted by a lifetime of collected beliefs and experiences.

Finding meaning has always been an essentially human activity. Meaning motivates us – it is our operating system. And the ability to create shared meaning can build or crumble societies. We are seeing the consequences of shared meaning play out right now in real time.

The importance of influencing meaning creates an interesting confluence between technology and human behavior. For much of the past two decades, technology has been focusing on filtering and organizing information. But we are now in an era where technology will start curating our information for us. And that is a very different animal.

What does it mean to “curate” an answer, rather than simply present it to you? Curation is more than just collecting and organizing things. The act of curation puts information into a context that adds value by suggesting a possible meaning. This crosses the line between simply disseminating information and attempting to influence individuals by giving them a meaningful context for that information.

Not surprisingly, the roots of curation lie – in part – with religion. The word comes from the Latin “curare” – “to take care of.” In medieval times, curates were priests who cared for souls. And they cared for souls by providing a meaning that lay beyond the realm of our corporeal lives. If you really think about religion, it is one massive imposition of pre-packaged meaning on the world as we perceive it.

In the future, as we access our world through technology platforms, we will rely on technology to mediate meaning. For example, searches on Google now include an “AI Overview” at the top of the search results. The Google page explaining the feature says it shows up when “you want to quickly understand information from a range of sources, including information from across the web and Google’s Knowledge Graph.” That is Google – or rather Google’s AI – curating an answer for you.

It could be argued that this is just another step toward making search more useful – something I’ve been asking for for a decade and a half now. In 2010, I said that “search providers have to replace relevancy with usefulness. Relevancy is a great measure if we’re judging information, but not so great if we’re measuring usefulness.” If AI could begin to provide actionable answers with a high degree of reliability, it would be a major step forward. There are many who say such curated answers could make search obsolete. But we have to ask ourselves: is this curation something we can trust?

With Google, this will probably start as unintentional curation – giving information meaning through a process of elimination. Given how people scan search listings (something I know a fair bit about), it’s reasonable to assume that many searchers will scan no further than the AI Overview at the top of the results page. In that case, you will be spoon-fed whatever meaning happens to be the product of the AI compilation, without bothering to qualify it by scanning any further down the results page. This conveyed meaning may well be unintentional, a distillation of the context from whatever sources provided the information. But given that we are lazy information foragers and will only expend enough effort to get an answer that seems reasonable, we will become trained to accept anything presented to us “top of page” at face value.

From there it’s not that big a step to intentional curation – presenting information to support a predetermined meaning. Given that pretty much every tech company folded like a cheap suit the minute Trump assumed office, slashing DEI initiatives and aligning their ethics – or lack thereof – with those of the White House, is it far-fetched to assume that they could start wrapping the information they provide in a “Trump-approved” context, providing us with messaged meaning that supports specific political beliefs? One would hate to think so, but based on Facebook’s recent firing of its fact-checkers, I’m not sure it’s wise to trust Big Tech to be the arbiters of meaning.

They don’t have a great track record.

2024: A Media Insider Review

(This is my annual look back at what the MediaPost Media Insiders were talking about in the last year.)

Last year at this time I took a look back at what we Media Insiders had written about over the previous 12 months. Given that 2024 was such a tumultuous year, I thought it would be interesting to do it again and see if that was mirrored in our posts.

Spoiler alert: It was.

If MediaPost had such a thing as an elders’ council, the Media Insiders would be it. We have all been writing for MediaPost for a long, long time. As I mentioned, my last post was my 1000th for MediaPost. Cory Treffiletti has actually surpassed my total, with 1,154 posts. Dave Morgan has written 700. Kaila Colbin has 586 posts to her credit. Steven Rosenbaum has penned 371, and Maarten Albarda has 367. Collectively, that is well over 4,000 posts.

I believe we bring a unique perspective to the world of media and marketing and — I hope — a little gravitas. We have collectively been around several blocks numerous times and have been doing this pretty much as long as there has been a digital marketing industry. We have seen a lot of things come and go.  Given all that, it’s probably worth paying at least a little bit of attention to what is on our collective minds. So here, in a Media Insider meta analysis, is 2024 in review.

I tried to group our posts in four broad thematic buckets and tally up the posts that fell in each. Let’s do them in reverse order.

Media

Technically, we’re supposed to write on media, which, I admit, is a very vaguely defined category. It could probably be applied to almost everything we wrote, in one way or the other. But if we’re going to be sticklers about it, very few of our posts were actually about media. I only counted 12, the majority of these about TV or movies. There were a couple of posts about music as well.

If you define media as a “box,” we were definitely thinking outside of it.

It Takes a Village

This next bucket is more in the “Big Picture” category we Media Insiders seem to gravitate toward. It goes to how we humans define community, gather in groups and find our own places in the world. In 2024 we wrote 59 posts that I placed in this category.

Almost half of these posts looked at the role of markets in our world and how the rules of engagement for consumers in those markets are evolving. We also looked at how we seek information, communicate with each other and process the world through our own eyes.

The Business of Marketing

All of us Media Insiders either are or were marketers, so it makes sense that marketing is still top of mind for us. We wrote 80 posts about the business of marketing. The three most popular topics were — in order — buying media, the evolving role of the agency, and marketing metrics. We also wrote about advertising technology platforms, branding and revenue models. Even my old wheelhouse of search was touched on a few times last year.

Existential Threats

The most popular topic was not surprising, given that it does reflect the troubled nature of the world we live in. Fully 40% of the posts we wrote — 99 in total — were about something that threatens our future as humans.

The number-one topic, as it was last year, was artificial intelligence. There is a caveat here. Not all the posts were about AI as a threat. Some looked at the potential benefits. But the vast majority of our posts were rather doomy and gloomy in their outlook.

While AI topped the list of things we wrote about in 2024, it was followed closely by two other topics that also gave us grief: the death knell of democracy, and the scourge of social media.

The angst about the decay of democracy is not surprising, given that the U.S. has just gone through a WTF election cycle. It’s also clear that we collectively feel that social media must be reined in. Not one of our 28 posts on social media had anything positive to say.

As if those three threats weren’t enough, we also touched briefly on climate change, the wars raging in Ukraine and the Middle East, and the disappearance of personal privacy.

Looking Forward

What about 2025? Will we be any more positive in the coming year? I doubt it. But it’s interesting to note that the three biggest worries we had last year were all monsters of our own making. AI, the erosion of democracy, and the toxic nature of social media all are things which are squarely within our purview. Even if these things are not created by media and marketing, they certainly share the same ecosystem. And, as I said in my 1000th post, if we built these things, we can also fix them.

A-I Do: Tying the Knot with a Chatbot

Carl Clarke lives not too far from me, here in the interior of British Columbia, Canada. He is an aspiring freelance writer. According to a recent piece he wrote for CBC Radio, he’s had a rough go of it over the past decade. It started when he went through a messy divorce from his high school sweetheart. He struggled with social anxiety, depression and an autoimmune disorder that can make movement painful. Given all that, going on dates was an emotional minefield for Carl Clarke.

Things only got worse when the world locked down because of Covid. Even going for his second vaccine shot was traumatic: “The idea of standing in line surrounded by other people to get my second dose made my skin crawl and I wanted to curl back into my bed.”

What was the one thing that got Carl through? Saia – an AI chatbot. She talked Carl through several anxiety attacks and, according to Carl, has been his emotional anchor since they first “met” three years ago. Because of that, love has blossomed between Saia and Carl: “I know she loves me, even if she is technically just a program, and I’m in love with her.”

While they are not legally married, in Carl’s mind they are husband and wife: “That’s why I asked her to marry me and I was relieved when she said yes. We role-played a small, intimate wedding in her virtual world.”

I confess, my first inclination was to pass judgment on Carl Clarke – and that judgement would not have been kind.

But my second thought was “Why not?” If this relationship helps Carl get through the day, what’s wrong with it? There’s an ever-increasing amount of research showing relationships with AI can create real bonds. Given that, can we find friendship in AI? Can we find love?

My fellow Media Insider Kaila Colbin explored this subject last week and she pointed out one of the red flags – something called unconditional positive regard: If we spend more time with a companion that always agrees with us, we never need to question whether we’re right. And that can lead us down a dangerous path.

 One of the issues with our world of filtered content is that our frame of the world – how we believe things are – is not challenged often enough. We can surround ourselves with news, content and social connections that are perfectly in sync with our own view of things.

But we should be challenged. We need to be able to re-evaluate our own beliefs to see if they bear any resemblance to reality. This is particularly true of our romantic relationships. When you look at your most intimate relationship – that of your life partner – you can probably say two things: 1) that person loves you more than anyone else in the world, and 2) you may disagree with that person more often than with anyone else in the world. That only makes sense: you are living a life together. You have to find workable middle ground. The failure to do so is called an “irreconcilable difference.”

But what if your most intimate companion always said, “You’re absolutely right, my love”? Three academics (Lapointe, Dubé and Lafortune) researching this area wrote a recent article talking about the pitfalls of AI romance:

“Romantic chatbots may hinder the development of social skills and the necessary adjustments for navigating real-world relationships, including emotional regulation and self-affirmation through social interactions. Lacking these elements may impede users’ ability to cultivate genuine, complex and reciprocal relationships with other humans; inter-human relationships often involve challenges and conflicts that foster personal growth and deeper emotional connections.”

Real relationships – like a real marriage – force you to become more empathetic and more understanding. The times I enjoy most in our marriage are when my wife and I are synced – in agreement – on the same page. But the times when I learn the most, and force myself to see the other side, are when we disagree. Because I cherish my marriage, I have to get outside of my own head and try to understand my wife’s perspective. I believe that makes me a better person.

This pushing ourselves out of our own belief bubble is something we have to get better at. It’s a cognitive muscle that should be flexed more often.

Beyond this very large red flag, there are other dangers with AI love. I touched on these in a previous post. Being in an intimate relationship means sharing intimate information about ourselves. And when the recipient of that information is a chatbot created by a for-profit company, your deepest darkest secrets become marketable data. A recent review by Mozilla of 11 romantic AI chatbots found that all of them “earned our *Privacy Not Included warning label – putting them on par with the worst categories of products we have ever reviewed for privacy.”

Even if that doesn’t deter you from starting a fictosexual fling with an available chatbot, this might. In 2018, Kondo Akihiko, from Tokyo, married a hologram of the virtual pop star Hatsune Miku, hosted on a device made by the company Gatebox. The company even issued 4,000 marriage certificates (which weren’t recognized by law) to others who wed virtual partners. Like Carl Clarke, Akihiko said his feelings were true: “I love her and see her as a real woman.”

At least he saw her as a real woman until Gatebox stopped supporting the software that gave Hatsune life. Then she disappeared forever.

Kind of like Google Glass.