When the News Hits Home

My, how things have changed.

My intention was to write a follow-up to last week’s post about Canada’s Bill C-18 and Meta’s banning of news on Facebook. I suppose this is a follow-up of sorts. But thanks to Mother Nature – that ofttimes bully – that story was pushed right out of the queue, replaced by something far more tragic and immediate.

To me, anyway.

I live in Kelowna. Chances are you’ve heard about my home in the last few days. If you haven’t, I can tell you that when I look out my window, all I can see is thick smoke. Which may be a good thing. Last Friday, when I could see, I spent the entire evening watching West Kelowna, across Okanagan Lake from my home, burn in the path of the oncoming McDougall Creek wildfire. Each time the flames suddenly leapt towards the sky, you knew that was someone’s home igniting.

We don’t know how many homes have been lost. The fire has been too active for authorities to have time to count. We have firefighters and first responders pouring in from around our province to help. Our Air Quality Index is 11 on a scale of 10, as bad as it can get. Thousands are out of their homes. Thousands more have their things packed by the door, ready to leave at a moment’s notice. We’re one of them.

But that’s enough about the fire. This post is about our weird relationship with the news.

When something like this happens, you have a very real, very visceral need to know what’s going on. For those of us who live here in British Columbia, the news has hit home in a way we could never have imagined. A few posts ago, I said it might be healthier for me to ignore the news, because it’s always alarming and very seldom relevant to me. Well, those words are now coming back to haunt me.

This disaster has thrown our reliance on Facebook for news into stark relief. This last Friday, Canada’s Transport Minister, Pablo Rodriguez, asked Meta to reverse its ban on news: “We’ve seen that, throughout this emergency, Canadians have not had access to the crucial information they need. So, I ask Meta to reverse its decision, allow Canadians to have access to news on their platforms.”

But there’s another dimension to this that’s a bit more subtle, yet even more frightening. It goes to the heart of how we handle a crisis. I think you must necessarily “zoom in,” performing some type of terrible triage in your mind, to be able to imagine the unimaginable. As the winds shift the fire away from your home, there’s relief. But other homes now lie in the path of the fire. In your head, you know that, but emotionally you can’t help but feel a lift. It’s not noble, but it’s human.

So let’s “zoom out” – a lot. We’re not the only ones this is happening to. This is a global crisis. Twenty-six thousand people have been evacuated on the Spanish island of Tenerife. A friend of mine, an airline pilot, spent last week volunteering to fly people out of Maui who had lost their homes in the tragic Lahaina fire.

Take a look at NASA’s FIRMS (Fire Information for Resource Management System) website, which gives a global map of hotspots from burning wildfires. I’ve set this link to wildfire activity in the last seven days.

Scary as hell, right?

But can we actually process that, in a way that lets us move forward and start coping with this massive issue? Is it enough to change our behaviors in the way we must to finally start addressing climate change?

In a recent article on BBC.com, Richard Fisher talks about “Construal level theory” – which says that the greater the psychological distance there is between the news and your life, the less likely it is to make you change your behavior. For me, the psychological distance between myself and climate change is roughly 1 kilometer (just over half a mile) as the crow flies. That’s how far it is from my house to the nearest evacuation alert area.

It doesn’t get much closer than that. But will we change? Will anything change?

I’m not so sure. We’ve been through this before. Exactly 20 years ago, the Okanagan Mountain Park fire raged through Kelowna, displacing over 30,000 people and destroying 239 homes. It was a summer much like this one, at the time the driest on record. This year we have smashed that record, as we have many times since that fire. Once we picked up, rebuilt our homes and got back to life, nothing really changed.

And now, here we are again. Let’s hope that this time is different.

How Canada is Killing its News Industry

In Canada, an interesting game of brinkmanship is happening. To help bring you up to speed, here are the Coles Notes:

  • Like everywhere in the world, Canada’s news outlets are starving for revenue. Advertising is drying up, as more budget moves online.
  • In an ill-advised attempt to shore up the Canadian news industry, the federal government passed Bill C-18, the Online News Act, which says that Facebook, Google and other tech giants must pay news organizations when someone comes to a web story through a link on one of their platforms.
  • Meta said – basically – WTF? We’re sending you traffic. You want us to pay for that? Fine, we’ll shut off that traffic.

Back in June, Meta posted this notice:

“In order to comply with the Online News Act, we have begun the process of ending news availability in Canada. These changes start today, and will be implemented for all people accessing Facebook and Instagram in Canada over the course of the next few weeks.”

Those changes started stripping news from our social media feeds over the past few weeks. I haven’t seen a single news item on my Facebook feed in the last week.

If you’re confused, you have a lot of company north of the 49th. Logic seems to be totally missing from this particular legislative hammer toss from Justin Trudeau and his merry band of lawmakers.

If there is any logic, it may be that many users never bother to click through to the actual story. They apparently get all the news they need from doomscrolling on Facebook.

Michael Geist, the Canada Research Chair in Internet and E-Commerce Law at the University of Ottawa, calls the bill a “Lose-Lose-Lose-Lose.”

For the media outlets that this bill is supposedly protecting, Geist says, “It is difficult to overstate the harm that Bill C-18 will create for the media sector in Canada, with enormous losses that will run into the hundreds of millions of dollars.”

Oops.

Geist details how lobbyists and supporters of the bill were sure Meta was bluffing and would come to the table to negotiate when bidden to do so. A law professor from Carleton University said “I am not worried. The threats they are making, they are doing this all around the world.”

But Meta wasn’t bluffing. And why would they? When you hold all the cards, you don’t have to bluff. Some news publishers estimate that as much as 50% of their traffic comes from these online channels. A recent study by Maru Public Opinion showed that 26% of Canadians say they get their news from social media sites. For younger age cohorts, this percentage jumps to 35%.

News publishers have now lost that traffic, with no offsetting revenue from Bill C-18 to compensate for it. For a bill that was supposed to save the Canadian news industry, this seems to be hammering nails in the coffin at an alarming rate.

As Geist said, this is “a cautionary tale for a government that blithely ignored the warning signs, seemed to welcome a fight with the tech companies, and had no Plan B.”

If there are lessons to be learned – or, at least, points to be pondered – in this Canadian debacle, here are two to consider:

First, this shows that legislators, not just in Canada but around the world, have no idea of the new power dynamics in a digital economy. They still carry the quaint notion that they are the power brokers within their borders. But this episode shows that Meta couldn’t care less about the Canadian market. We are a drop in their global revenue bucket. Not only have they not caved in when confronted with the awesome might of the Canadian government, they haven’t even bothered coming back to the table to talk. When the Liberal lawmakers decided to take on Meta, they were bringing a knife to a gunfight.

Second, I wonder how one third of Canadians will now be informed about what’s happening in the world. With every information source that has even a shred of journalistic integrity stripped from their Facebook and Instagram feeds, who will they be listening to? In a bid for survival, Canada’s news publishers are supposedly launching a desperate campaign to “re-educate” us on how to find the news.

Yeah. We all know how successful “re-education” campaigns are.

Finally, in the irony of ironies, as they squared off against Facebook in this ill-fated battle, Canada’s Liberal government launched a new campaign asking for us to share our thoughts on a “Summer Check-In Survey.”

Their platform of choice for this campaign? Facebook.

It’s All in How You Spin It

I generally get about 100 PR pitches a week. And I’m just a guy who writes a post on tech, people and marketing now and then. I’m not a journalist. I’m not even gainfully employed by anyone. I am just one step removed — thanks to the platform MediaPost has provided me — from “some guy” you might meet at your local coffee shop.

But still, I get 100 PR pitches a week. Desperation for coverage is the only reason I can think of for this being so. They go straight to my trash basket 99.9999% of the time. And the reason they do is that they’re almost never interesting. They are — well, they’re pitches for free exposure.

Now, the average pitch, even if it isn’t interesting, should at least try to match the target’s editorial interest. It should be in the strike zone, so to speak.

Let’s do a little postmortem on one I received recently. It was titled “AI in Banking.” Fair enough. I have written a few posts on AI. Specifically, I have written a few posts on my fear of AI.

I have also written about my concerns about misuse of data. When it comes to the nexus between AI and data, I would be considered more than a little pessimistic. So, something linking AI and banking did pique my interest, but not in a good way. I opened the email.

There, in the first paragraph, I read this: “AI is changing how banks provide personalized recommendations and insights based on enriched financial data offering tailored suggestions, such as optimizing spending, suggesting suitable investment opportunities, or identifying potential financial risks.”

This, for those of you not familiar with “PR-ese,” is what we in the biz call “spin.” Kellyanne Conway once called it — more euphemistically — an alternative fact.

Let me give you an example. Let’s say that during the Tour de France half the peloton crashes and the riders get a nasty case of road rash. A PR person would spin that to say that “Hundreds of professional cyclists discover a new miracle instant exfoliation technique from the South of France.”

See? It’s not a lie, it’s just an alternative fact.

Let’s go on. The second paragraph of the pitch continued: “Bud, a company that specializes in data intelligence is working with major partners across the country (Goldman Sachs, HSBC, 1835i, etc.) to categorize and organize financial information and data so that users are empowered to make informed decisions and gain a deeper understanding of their financial situation.”

Ah — we’re now getting closer to the actual fact. The focus is beginning to switch from the user, empowered to make better financial decisions thanks to AI, to what is actually happening: a data marketplace being built on the backs of users for sale to corporate America.

Let’s now follow the link to Bud’s website. There, in big letters on the home page, you read:

“Turn transactional data into real-time underwriting intelligence

Bud’s AI platform and data visualizations help lenders evaluate risk, reduce losses and unlock hidden revenue potential.”

Bingo. This is not about users, at least, not beyond using them as grist in a data mill. This is about slipping a Trojan Horse into your smartphone in the form of an app and hoovering your personal data up to give big banks an intimate glimpse into not just your finances, but also your thinking about those finances. As you bare your monetary soul to this helpful “Bud,” you have established a direct pipeline to the very institutions that hold your future in their greedy little fingers. You’re giving an algorithm everything it needs to automatically deny you credit.

This was just one pitch that happened to catch my eye long enough to dig a little deeper. But it serves as a perfect illustration of why I don’t trust big data or AI in the hands of for-profit corporations.

And that will continue to be true — no matter how you PR pros spin it.

No News is Good News

I’m trying not to pay too much attention to the news. This is partly because I’m exhausted by the news, and partly because of the sad state of journalism today.

This isn’t just a “me” thing. Almost everyone I talk to says they’re trying to find coping mechanisms to deal with the news. The news industry – and its relationship with its audience – has gone from being an essential part of a working democracy to something that is actually bad for us. In an online essay from four years ago, Swiss author Rolf Dobelli equates news consumption to a bad diet:

“(translated from its original German) News is to the mind what sugar is to the body. News is appetizing, easily digestible and at the same time highly harmful. The media feeds us morsels of trivial stories, tidbits that by no means satisfy our hunger for knowledge. Unlike with books and long, well-researched articles, there is no saturation when consuming news. We can devour unlimited amounts of messages; they remain cheap sugar candies. As with sugar, the side effects only show up with a delay.”

Rolf Dobelli, 2019

This alarming state is due to the fact that the News (in the US) is supported by advertising, which means it has a ravenous appetite for eyeballs. Because of this, it is highly profitable to make news addictive.

This creates a state, as Dobelli points out, where even though the news is highly inflammatory, like a constantly jangling alarm bell, almost all the news we consume is irrelevant to our daily lives. While the news we watch pushes all our hot buttons, it doesn’t serve a useful purpose. In fact, it does the exact opposite: it leads to chronic mental and physical ill-being and may cause us to start ignoring the warning signs we should be paying attention to.

A study last year (McLaughlin, Gotlieb and Mills) found ties between problematic news consumption and mental ill-being. The study found that 16.5% of 1,100 people polled in an online survey showed signs of “severely problematic” news consumption, which led them to focus less on school, work and family, and contributed to an inability to sleep.

Dobelli’s essay goes even further, pointing a finger at excessive news consumption as the cause of a list of issues including cognitive errors, inhibiting deeper thinking, wasting time, killing creativity, making us more passive and even wiring our brains for addiction in a manner similar to drugs.

All these negative side effects come from chronic stress – a constant and pervasive alarmed state that excessive news consumption puts our brains into. And if you thought Dobelli’s list was scary, wait until you see the impact of chronic stress! It actually attacks the brain by releasing excessive amounts of cortisol and restricting the uptake of serotonin, which can increase inflammation, lead to depression, shrink your hippocampus and impact your memory, make it difficult to sleep and impair your ability to think rationally.

To put a new twist on an old saying, “No news is good news.”

But let’s put aside for a moment the physical and mental toll that news takes on us. Even if none of that were true, our constant diet of bad news can also lead to something known as “alarm fatigue.”

Alarm fatigue is essentially our response to the proverbial boy who cried wolf. After several false alarms, we stop paying attention. And the one time we should be paying attention, we are caught with our guard down.

There is one other problem with our news diet: it oversimplifies complex problems into simple sound bites. Thomas Jefferson said, “An educated citizenry is a vital requisite for our survival as a free people.” But when the news abdicates its role as an informer to pursue profit as entertainment, it is no longer educating us. It is pandering to us, stuffing us with bite-sized opinion pieces that reinforce our beliefs – right or wrong. We are never challenged to examine those beliefs or explore the complexity of the wicked problems that confront us. Real journalism has been replaced by profitable punditry.

All this leaves us with a choice. Until the news industry cleans up its act (I’m not holding my breath), you’re likely far better off ignoring it. Or at least, ignoring the profit-driven platforms that are hungry for eyeballs. Stay informed by turning to books, long articles and true investigative journalism. That’s what I’m going to start doing.

Failing all that, just think about things. I understand it’s good for you.

Why I’m Worried About AI

Even in my world, which is nowhere near the epicenter of the technology universe, everyone is talking about AI. And depending on who’s talking, it’s either going to be the biggest boon to humanity, or it’s going to wipe us out completely. Middle ground seems to be hard to find.

I recently attended a debate at the local university about it. Two were arguing for AI, and two were arguing against. I went into the debate somewhat worried. When I walked out at the end of the evening, my worry was bubbling just under the panic level.

The “For” Team had a computer science professor – Kevin Leyton-Brown – and a philosophy professor – Madeleine Ransom. Their arguments seemed to rely mainly on AI creating more leisure time for us by freeing us from the icky jobs we’d rather not do. Leyton-Brown did make a passing reference to AI helping us solve the many, many wicked problems we face, but he never got into specifics.

“Relax!” seemed to be the message. “This will be great! Trust us!”

The “Against” Team included a professor of Creative and Critical Studies – Bryce Traister. As far as I could see, he seemed to be mainly worried about AI replacing Shakespeare. He did seem quite enamored with the cleverness of his own quips.

It was the other “Against” debater who was the only one to talk about something concrete I could wrap my head around. Wendy Wong is a professor of political science with a book on data and human rights coming out this fall, and many of her concerns focused on that area.

Interestingly, the debaters on both sides mentioned social media in their arguments. And on this point, they were united: all agreed that the impact of social media has been horrible. But the boosters were quick to say that AI is nothing like social media.

Except that it is. Maybe not in terms of the technology that lies beneath it, but in terms of the unintended consequences it could unleash, absolutely! Like Social Media, what will get us with AI are the things we don’t know we don’t know.

I remember when social media first appeared on the scene. As with AI, there were plenty of evangelists lining up to say the technology would connect us in ways we couldn’t have imagined. We were redefining community, removing the physical constraints that had previously limited connections.

If there was a difference between social media and AI, it’s that I don’t remember any doomsayers at the advent of social media. Everyone seemed to be saying, “This will be great! Trust us!”

Today, of course, we know better. No one was warning us that social media would divide us in ways we never imagined, driving a wedge down the ideological middle of our society. There were no hints that social media could (and still might) short circuit democracy.

Maybe that’s why we’re a little warier when it comes to AI. We’ve already been fooled once.

I find that AI boosters share a similar mindset – they tend to come from the S.T.E.M. (Science, Technology, Engineering and Math) school of thought. As I’ve said before, these types of thinkers tend to mistake complex problems for complicated ones. They think everything is solvable if you just have a powerful enough tool and apply enough brain power. For them, AI is the Holy Grail – a powerful tool that potentially applies unlimited brain power.

But the dangers of AI are hidden in the roots of complexity, not complication, and that requires a different way of thinking. If we’re going to get some glimpse of what’s coming our way, I am more inclined to trust the instincts of those who think in terms of the humanities. A thinker, for example, such as Yuval Noah Harari, author of Sapiens.

Harari recently wrote an essay in The Economist that may be the single most insightful thing I’ve read about the dangers of AI: “AI has gained some remarkable abilities to manipulate and generate language, whether with words, sounds or images. AI has thereby hacked the operating system of our civilisation.”

In my previous experiments with ChatGPT, it was this fear that was haunting me. Human brains operate on narratives. We are hard-wired to believe them. By using language, AI has a back door into our brains that bypasses all our protective firewalls.

My other great fear is that the development of AI is being driven by for-profit corporations, many of which rely on advertising as their main source of revenue. If ever there was a case of putting the fox in charge of the henhouse, this is it!

When it comes to AI, it’s not my job I’m afraid of losing. It’s my ability to sniff out AI-generated bullshit. That’s what’s keeping me up at night.

The Comedic Comeback

Public confessions are a funny thing.

No, seriously. They’re funny. At least, John Mulaney hopes they’re funny.

His latest Netflix special, Baby J, which dropped just two weeks ago, is all about coming back from having his reputation hammered on social media.

John has had a tough time of late. He filled his “Covid years” with getting divorced from his wife, Anna Marie Tendler, stumbling into an intervention, going to rehab, relapsing, going back to rehab, dating Olivia Munn and – oh yeah – announcing he’s having a baby with Munn. All of that happened, though not necessarily in that order.

Mulaney opens his Netflix show with a little song and dance:

“You know what I mean!
We all quarantined!
We all went to rehab and we all got divorced,
and now our rep-u-ta-tion is different!”
“No one knows what to think! 
Hey ya! 
All the kids like Bo Burnham more!
Because he’s currently less problematic…

Likability is a jail.”


“Likability is a jail.” Mulaney sang that with a smile on his face, but there is some grit in that line. You can almost feel it grinding in the gears of his career.

To be fair, when you build a career on likability in the era of social media, you have to accept that it’s a pretty tenuous foundation for fame. It leaves you extremely vulnerable to being publicly called out for anything that might rub against the grain of your carefully constructed brand. And if you are called out – or, in extreme cases, completely cancelled – you have to somehow make it all the way back from simply being accepted to being liked again.

When you think about it, it’s probably a lot easier to build your brand on being an asshole. It’s a much lower bar to get over. I don’t think Donald Trump loses a lot of sleep over being cancelled. And – just last week – people gathered at the Met in New York for its gala honoring fashion icon Karl Lagerfeld, who never apologized for being one of the biggest and most outspoken assholes in history.

Mulaney is the latest in a long line of comedian comebacks, performers who have been hammered by the fickle fist of being “social media famous.” He is gingerly treading in the footsteps of Louis C.K. and Aziz Ansari – even Chris Rock took a stab at it, and he wasn’t the one who got cancelled. That would be Will Smith, who is still trying to pick up the pieces of his career after an ill-considered incident of physical assault in front of a worldwide audience.

You probably wouldn’t be surprised to learn that there’s a playbook for coming back after being eviscerated in the public arena of social media. According to Lori Levine, CEO of the PR firm Flying Television, it requires something called an “Apology Tour.”

The timing of this is critical. According to Levine, you first have to fly under the radar for a bit, “take a certain amount of time to stay quiet, stay off social media, not engage in any press interviews.” After a period of being suitably and silently contrite, you then move to Stage Two, “Slowly return explaining that they have ‘done the work’ [and] are feeling remorseful.”

This was pretty much the playbook that Mulaney followed. The advantage, if you’re a comedian, is that the stand-up stage is the perfect platform for the “apology tour.” It has the built-in advantage of being an entertainment form that thrives on making fun of yourself. That’s probably why a good portion of Netflix’s programming calendar consists of comedians lining up for their respective “apology tours.”

Comedians on the social media comeback tour are also given a helping hand by the emergence of the “uneasy laughter” of dark comedy over the past decade or so. While dark – or black – humor has been around for decades in novels and movies, it is only recently that stand-up comedians have combined dark humor with an unflinchingly intimate look into their own personal struggles. Since Tig Notaro’s unapologetically brilliant 2012 live performance, in which she talked about her recent diagnosis of breast cancer, stand-up has dared to go to places never imagined just a few years earlier.

This creates the perfect environment for the “apology tour.” The whole point is to have a no-holds-barred discussion of where the comedian erred in judgement. Mulaney navigated this potential minefield with surefooted grace. Probably the funniest and most authentic bit came when he started riffing with a 5th grader up in the balcony at the start of the show, warning him not to “do any of the things I’m about to talk about.” Somehow – to me – that felt more real than everything that was to follow.

If anything, Mulaney’s recent performance was a sign of our times. It was a necessary step back from public humiliation. I’m not sure it was that funny. But it was John Mulaney reclaiming some control over his public persona. He was telling us we can’t possibly do anything worse to him than he’s done to himself…

“What, are you gonna cancel John Mulaney? I’ll kill him. I almost did.”

Search and ChatGPT – You Still Can’t Get There From Here

I’m wrapping up my ChatGPTrilogy with a shout-out to an old friend who will be familiar to many MediaPosters – Aaron Goldman. Thirteen years ago, Aaron wrote a book called Everything I Know About Marketing I Learned from Google. Just a few weeks ago, he shared a post entitled “In a World of AI, is Everything I Know about Marketing (still) Learned from Google”. In it, he looked at the last chapter of the book, which he called Future-Proofing. Part of that chapter was based on a conversation Aaron and I had back in 2010 about what search might look like in the future.

Did we get it right? Well, remarkably, we got a lot more right than we got wrong, especially with the advent of Natural Language tools such as ChatGPT and virtual assistants like Siri.

We talked a lot about something I called “app-ssistants”. I explained, “the idea of search as a destination is an idea whose days are numbered. The important thing won’t be search. It will be the platform and the apps that run on it. The next big thing will be the ability to seamlessly find just the right app for your intent and utilize it immediately.” In this context, “the information itself will become less and less important and the app that allows utilization of the information will become more and more important.”

To be honest, this evolution in search has taken a lot longer than I expected back then: “Intent will be more fully supported from end to end. Right now, we have to keep our master ‘intent’ plan in place as we handle the individual tasks on the way to that intent.”

Searching for complex answers, as it currently stands, requires a lot of heavy lifting. In that discussion, I used the example of planning a trip: “Imagine if there were an app that could keep my master intent in mind for the entire process. It would know what my end goal was, would be tailored to understand my personal preferences and would use search to go out and gather the required information. When we look at alignment of intent, [a shift from search to apps is] a really intriguing concept for marketers to consider.”

So, the big question is, do we have such a tool? Is it ChatGPT? I decided to give it a try and see. After feeding ChatGPT a couple of carefully crafted prompts about a trip I’d like to take to Eastern Europe someday, I decided the answer is no. We’re not quite there yet. But we’re closer.

After a couple of iterations, ChatGPT did a credible job of assembling a potential itinerary for a trip to Croatia and Slovenia. It even made me aware of some options I hadn’t run across in my previous research. But it left me hanging well short of the “app-ssistant” I was dreaming of in 2010. Essentially, I got a suggestion, but all the detail work to turn it into an actual trip still required me to do hundreds of searches in various places.

The problem with ChatGPT is that it gets stuck between the millions of functionality silos – or “walled gardens” – that make up the internet. Those walled gardens exist because they represent opportunities for monetization. For an app-ssistant to be able to multitask and make our lives easier, we need a virtual “commonage” that gets rid of some of those walls. And that’s probably the biggest reason we haven’t seen a truly useful iteration of the functionality I predicted more than a decade ago.

This conflict between capitalism and the concept of a commonage goes back at least to the Magna Carta. As England’s economy transitioned from feudalism to capitalism, enclosure saw the building of fences and the wiping out of lands held in common. The actual landscape became a collection of walled gardens that enforced the property rights of each parcel and protected the future production value of those parcels.

This history, which played out over hundreds of years, was repeated and compressed into a few decades online. We went from the naïve idealism of a “free for all” internet in the early days to the balkanized patchwork of monetization silos that currently makes up the web.

Right now, search engines are the closest thing we have to a commonage on the virtual landscape. Search engines like Google can pull data from within many gardens, but if we actually try to use the data, we won’t get far before we run into a wall.

To go back to the idea of trip planning: on a search engine, I might be able to see what it costs to fly to Rome or what accommodations in Venice cost, but I can’t book a flight or reserve a room. To do that, I have to visit an online booking site. If I’m on a search engine, I can manually navigate this transition fairly easily. But it would stop something like ChatGPT in its tracks.
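
To make that sticking point concrete, here is a deliberately toy sketch in Python of where the hand-off breaks down. Every class, method and price in it is invented for illustration – no such common booking API exists, which is exactly the point:

from dataclasses import dataclass

# Hypothetical silo interfaces, invented for illustration. Each one
# mirrors a real-world walled garden: quotes are publicly visible,
# but transactions require the silo's own login and checkout flow.

@dataclass
class Quote:
    provider: str
    item: str
    price: float

class FlightSilo:
    def search(self, dest: str) -> Quote:
        # Fake data standing in for what a crawler or chatbot can read.
        return Quote("AirSilo", f"YLW to {dest}", 840.0)

    def book(self, quote: Quote) -> str:
        raise PermissionError("booking requires AirSilo's own session")

class HotelSilo:
    def search(self, city: str) -> Quote:
        return Quote("StaySilo", f"3 nights in {city}", 610.0)  # fake data

    def reserve(self, quote: Quote) -> str:
        raise PermissionError("reserving requires StaySilo's own session")

def app_ssistant(dest: str) -> None:
    # Step 1 -- gathering quotes across gardens -- already works; it is
    # roughly what a search engine or chat interface can do today.
    quotes = [FlightSilo().search(dest), HotelSilo().search(dest)]
    for q in quotes:
        print(f"{q.provider}: {q.item} at ${q.price:.0f}")
    # Step 2 -- acting on a quote -- is where the assistant stalls.
    # There is no shared "commonage" protocol, so each transaction
    # dead-ends at the garden wall and the human has to take over.
    try:
        FlightSilo().book(quotes[0])
    except PermissionError as err:
        print(f"Stuck at the wall: {err}")

app_ssistant("Zagreb")

Until some shared transaction layer – a virtual commonage – exists behind those walls, every “book it for me” request dies at that second step, no matter how clever the chat interface sitting on top of it gets.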

When I talked to Aaron 13 years ago, I envisioned search becoming a platform that lived underneath apps that could provide more functionality to the user. But I was also skeptical about Google’s willingness to do this, as I stated in a later post here on MediaPost. In that post, I thought this might be an easier transition for Microsoft.

Whether it was prescience or just dumb luck, it is indeed Microsoft taking the first steps towards integrating search with ChatGPT, through its recent integration with Bing. Expedia (which also has Microsoft DNA in its genome) has also taken a shot at embedding ChatGPT in a natural language chat interface.

This flips my original forecast on its head. Rather than the data becoming common ground, it’s the chat interface that’s popping up everywhere. Rather than tearing down the walls that divide the online landscape, ChatGPT is being tacked up as window decoration on those walls.

I did try planning that same trip on both Bing and Expedia. Bing – alas – also left me well short of my imagined destination. Expedia – being a monetization site to begin with – got me a little closer, but it still didn’t seem that I could get to where I wanted to go.

I’m sorry to say search didn’t come nearly as far as I hoped it would 13 years ago. Even with ChatGPT thumbtacked onto the interface, we’re just not there yet.

(Feature Image: OpenAI Art generated from the prompt: “A Van Gogh painting of a chatbot on a visit to Croatia”)

The Dangerous Bits about ChatGPT

Last week, I shared how ChatGPT got a few things wrong when I asked it “who Gord Hotchkiss was.” I did this with my tongue at least partially implanted in cheek – but the response did show me a real potential danger here, coming from how we will interact with ChatGPT.

When things go wrong, we love to assign blame. And if ChatGPT gets things wrong, we will be quick to point the finger at it. But let’s remember, ChatGPT is a tool, and the fault very seldom lies with the tool. The fault usually lies with the person using the tool.

First of all, let’s look at why ChatGPT put together a bio of me that was somewhat less than accurate (although it was very flattering to yours truly).

When AI Hallucinates

I have found a few articles that call ChatGPT out for lying. But lying is an intentional act, and – as far as I know – ChatGPT has no intention of deliberately leading us astray. Based on how ChatGPT pulls together information and synthesizes it into a natural language response, it actually thought that “Gord Hotchkiss” did the things it told me I had done.

You could more accurately say ChatGPT is hallucinating – giving a false picture based on the information it retrieves and then tries to connect into a narrative. It’s a flaw that will undoubtedly get better with time.

The problem comes with how ChatGPT handles its dataset and determines relevance between items in that dataset. In this thorough examination by Machine Learning expert Devansh Devansh, ChatGPT is compared to predictive autocomplete on your phone. Sometimes, through a glitch in the AI, it can take a weird direction.

When this happens on your phone, it’s word by word, and you can easily spot where things are going off the rails. With ChatGPT, an error that might be small at first continues to propagate until the AI has spun complete bullshit and packaged it as truth. This is how it fabricated the Think Tank of Human Values in Business, a completely fictional organization, and inserted it into my CV in a very convincing way.
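
To get a feel for how one early slip snowballs, here is a tiny toy sketch in Python. It is nothing like the scale or architecture of a real language model – just a made-up bigram autocomplete – but it shows the mechanism: each word is chosen using only the words already generated, so a single unlucky early pick rewrites everything downstream into coherent-sounding fiction.

import random

# A toy bigram "autocomplete," invented for illustration. Each next
# word is chosen using only the words generated so far -- a drastic
# simplification of how a large language model decodes.
BIGRAMS = {
    "gord": [("hotchkiss", 0.9), ("lightfoot", 0.1)],
    "hotchkiss": [("writes", 1.0)],
    "lightfoot": [("sings", 1.0)],
    "writes": [("about", 1.0)],
    "sings": [("about", 1.0)],
    "about": [("marketing", 0.5), ("rivers", 0.5)],
}

def generate(seed, steps):
    out = [seed]
    for _ in range(steps):
        options = BIGRAMS.get(out[-1])
        if not options:
            break
        words, weights = zip(*options)
        # Every choice conditions on the previous output, so one
        # low-probability pick ("lightfoot") sends the whole
        # narrative somewhere plausible but false.
        out.append(random.choices(words, weights=weights)[0])
    return " ".join(out)

for i in range(5):
    random.seed(i)
    print(generate("gord", 4))

Roughly one run in ten takes the “lightfoot” branch, and every word after that is perfectly consistent with the mistake – which is why fabricated output can read just as convincingly as accurate output.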

There are many, many others who know much more about AI and natural language processing than I do, so I’m going to recognize my limits and leave it there. Let’s just say that ChatGPT is prone to sharing its hallucinations in a very convincing way.

Users of ChatGPT Won’t Admit Its Limitations

I know and you know that marketers are salivating over the possibility of AI producing content at scale for automated marketing campaigns. There is a frenzy of positively giddy accounts about how ChatGPT will “revolutionize Content Creation and Analysis” – including this admittedly tongue-in-cheek one co-authored by MediaPost Editor in Chief Joe Mandese and – of course – ChatGPT.

So what happens when ChatGPT starts to hallucinate in the middle of a massive social media campaign that is totally on autopilot? Who will be the ghost in the machine that says, “Whoa there, let’s just take a sec to make sure we’re not spinning out fictitious and potentially dangerous content”?

No one. Marketers are only human, and humans will always look for the path of least resistance. We work to eliminate friction, not add it. If we can automate marketing, we will. And we will shift the onus of verifying information to the consumer of that information.

Don’t tell me we won’t, because we have in the past and we will in the future.

We Believe What We’re Told

We might like to believe we’re Cartesian, but when it comes to consuming information, we’re actually Spinozan.

Let me explain. French philosopher René Descartes and Dutch philosopher Baruch Spinoza had two different views of how we determine if something is true.

Descartes believed that understanding and believing were two different processes. According to Descartes, when we get new information, we first analyze it and then decide whether we believe it or not. This is the rational assessment that publishers and marketers always insist we humans perform, and it’s their fallback position when they’re accused of spreading misinformation.

But Baruch Spinoza believed that understanding and belief happened at the same time. We start from a default position of believing information to be true without really analyzing it.

In 1993, Harvard psychology professor Daniel Gilbert decided to put the debate to the test (Gilbert, Tafarodi and Malone). He split a group of volunteers in half and gave both halves a text description detailing a robbery. In the text there were true statements, shown in green, and false statements, shown in red. Some of the false statements made the crime appear to be more violent.

After reading the text, the study participants were supposed to decide on a fair sentence. But one of the groups got interrupted with distractions. The other group completed the exercise with no distractions. Gilbert and his researchers believed the distracted group would behave in a more typical way.

The distracted group gave out substantially harsher sentences than the other group. Because they were distracted, they forgot that green sentences were true and red ones were false. They believed everything they read (in fact, Gilbert’s paper was called “You Can’t Not Believe Everything You Read”).

Gilbert’s study showed that humans tend to believe first, and that we actually have to “unbelieve” if something is eventually proven to us to be false. One study even found the place in our brain where this happens – the right inferior prefrontal cortex. This suggests that “unbelieving” makes the brain work harder than believing, which happens by default.

This brings up a three-pronged dilemma when we consider ChatGPT: it will tend to hallucinate (at least for now), users of ChatGPT will disregard that flaw when there are significant benefits to doing so, and consumers of ChatGPT-generated content will believe those hallucinations without rational consideration.

When Gilbert wrote his paper, he was still three decades away from this dilemma, but he wrapped up with a prescient debate:

“The Spinozan hypothesis suggests that we are not by nature, but we can be by artifice, skeptical consumers of information. If we allow this conceptualization of belief to replace our Cartesian folk psychology, then how shall we use it to structure our own society? Shall we pander to our initial gullibility and accept the social costs of prior restraint, realizing that some good ideas will inevitably be suppressed by the arbiters of right thinking? Or shall we deregulate the marketplace of thought and accept the costs that may accrue when people are allowed to encounter bad ideas? The answer is not an easy one, but history suggests that unless we make this decision ourselves, someone will gladly make it for us.”

Daniel Gilbert

What Gilbert couldn’t know at the time was that “someone” might actually be a “something.”

(Image: Etienne Girardet on Unsplash)

I Wish I Were Half the Person ChatGPT Thinks I Am

Yes, I’ve finally crawled aboard the journalist bandwagon about ChatGPT. The reason I’ve waited so long is that I don’t really have any valid technical or professional perspective to add to the conversation. All I have is my own personal opinion.

So here it is, for what it’s worth.

For my first test, I wanted to ask ChatGPT about something where there would be a fair amount of information online but also something where I had the advantage of knowing the inside scoop, allowing me to judge the accuracy of its response. So I asked about the Golden Triangle – the outcome of eye tracking studies we did almost 20 years ago (I just had a minor internal meltdown when I did the mental math and realized that was two decades ago).

I have to admit, ChatGPT nailed it. It did an accurate and concise summary. I’m not sure I could have done better myself.

Google’s Golden Triangle, also known as the “Golden Triangle of Search,” refers to the top section of Google’s search engine results page (SERP) where the most relevant and highest-ranked search results are displayed. It is called the “golden triangle” because eye-tracking studies have shown that users tend to focus their attention on the top three results in the search results page, forming a triangular shape.

The concept of the Golden Triangle is important for search engine optimization (SEO) because it emphasizes the importance of ranking high in search results. Websites that appear in the Golden Triangle are more likely to receive clicks and traffic than those that appear lower on the page.

However, it’s worth noting that the layout of Google’s search engine results page has changed over the years, and the Golden Triangle may not be as relevant as it once was. Google now includes various types of results, such as featured snippets and knowledge panels, that may appear above or alongside the traditional organic search results. As a result, SEO strategies have evolved to include optimizing for these new types of results as well.

When I asked who discovered The Golden Triangle, ChatGPT accurately said it was a joint effort.

The concept of the Golden Triangle in relation to Google’s search engine results page (SERP) was not discovered by a single person, but rather through a combination of eye-tracking studies and data analysis conducted by various researchers and digital marketing professionals.

One of the earliest studies that examined how users interact with Google’s search results page was conducted by Enquiro Research in 2005. This study, titled “The Golden Triangle,” found that users tended to focus their attention on the top-left portion of the search results page, where the first few results were displayed.

Since then, other studies and analyses have confirmed the importance of the top section of the search results page, and the term “Golden Triangle” has become a widely used term in the SEO industry to describe this phenomenon.

For the record, the main players in identifying the pattern were Kevin Lee at DidIt, myself and my agency, Enquiro, EyeTools, the San Francisco eye-tracking consultants who conducted the first study and – in an independent study – Jakob Nielsen at the Nielsen Norman Group. It was Kevin and I who gave it the name The Golden Triangle, mere moments before we had to present the findings to a group at a search conference in San Jose.

So let’s try something a little less black and white. Let’s see how ChatGPT does when it has to freestyle a bit. I asked it who I was.

The first paragraph was pretty accurate:

Gord Hotchkiss is a Canadian internet marketing and search engine optimization (SEO) expert. He is the CEO of a digital marketing agency called “Enquiro Search Solutions” which specializes in providing marketing and SEO services to businesses. Gord Hotchkiss is also a well-known speaker and author on the topics of online marketing and search engine optimization.

The second paragraph started pumping my ego a little, throwing in some accomplishments I either never did or else can’t remember doing:

Gord Hotchkiss has been involved in the online marketing industry for over 20 years and has worked with a wide range of clients, including Fortune 500 companies. He has written numerous articles on the topics of SEO, online marketing, and search engine technology for publications such as Search Engine Land, ClickZ, and MarketingProfs.

But in the third paragraph, ChatGPT definitely had me confused with someone else:

In addition to his work in the online marketing industry, Gord Hotchkiss is also a co-founder of the “Think Tank of Human Values in Business”, a non-profit organization that focuses on the intersection of business and ethics. He is known for his thought leadership in the industry and for advocating for ethical and sustainable business practices.

Okay, where the hell did that come from? It sounds extraordinarily cool and it’s something I would love to do, but it’s definitely not on my CV. As far as I’m aware, there is no such thing as the “Think Tank of Human Values in Business.” I wish there were, but there’s not.

I tried again. This time, ChatGPT seemed a little less sure of itself. The first sentence was somewhat correct, but after that ChatGPT seemed to manufacture my bio out of nothing but moonbeams and fairy dust. It said I was:

the CEO of a leading digital marketing agency, named “Digital Always Media Inc.” Gord is also a co-founder and chair of the “The Agents of Change” conference, an annual conference focused on helping businesses adapt to the ever-changing digital landscape.

Gord has written numerous articles and books on digital marketing and has been featured in several major media outlets, including The New York Times, The Wall Street Journal, and CNN.

I don’t know who this particular Gord Hotchkiss is, but I’d like to meet him. My AI doppelgänger is leaving me in the dust. I’d better get my shit together.

Or maybe ChatGPT can actually see into the future, showing me the person I just haven’t become yet.

What worries me is how easily ChatGPT constructed complete fabrications of who I was that sounded perfectly plausible. If I didn’t know myself better, I would have simply accepted it as fact. And that – unfortunately – is what we tend to do. We don’t analyze new information and determine if it’s right or wrong. Our default is to believe it’s true until proven otherwise.

It’s this human tendency that flags the real danger with ChatGPT. And, for that reason, I have some more digging to do.

Maybe this other Gord Hotchkiss guy can give me a hand. He sounds wicked smart.

(Image by Brian Penny — Pixabay license)

It Took a Decade, but Google Glass is Finally Broken

Did you hear that Google finally pulled the plug on Google Glass?

Probably not. The announcement definitely flew under the radar. It came with much less fanfare than the original roll out in 2013. The technology, which has been quietly on life support as an enterprise tool aimed at select industries, finally had its plug pulled with this simple statement on its support page:

Thank you for over a decade of innovation and partnership. As of March 15, 2023, we will no longer sell Glass Enterprise Edition. We will continue supporting Glass Enterprise Edition until September 15, 2023.

Talk about your ignoble demises. They’re offering a mere six months of support for those stubbornly hanging on to their Glass. Glass has been thrown into the ever-growing Google Graveyard, along with Google Health, Google+, Google Buzz, Google Wave, Knol – well, you get the idea.

It’s been 10 years, almost to the day, since Google invited 8,000 people to become “Glass Explorers” (others had a different name for them – “Glassholes”) and plunge into the world of augmented reality.

I was not a believer – for a few reasons I talked about way back then. That led me to say, “Google Glass isn’t an adoptable product as it sits.” It took 10 years, but I can finally say, “I told you so.”

I did say that wearable technology, in other forms, would be a game changer. I just didn’t think Google Glass was the candidate to do it. To be honest, I hadn’t really thought much more about it until I saw the muted news that this particular Glass was a lot more than half empty. I think there are some takeaways about the fading dividing line between technology and humans that we should keep in mind.

First of all, I think we’ve learned a little more about how our brains work with “always on” technologies like Google Glass. The short answer is: they don’t – at least not very well. And this is doubly ironic because, according to an interview with Google Glass product director Steve Lee on The Verge back in 2013, that was the whole point:

“We all know that people love to be connected. Families message each other all the time, sports fanatics are checking live scores for their favorite teams. If you’re a frequent traveler you have to stay up to date on flight status or if your gate changes. Technology allows us to connect in that way. A big problem right now are the distractions that technology causes.”

The theory was that it was much less distracting to have information right in the line of sight, rather than having to go to a connected screen that might be in your pocket.

Lee went on. “We wondered, what if we brought technology closer to your senses? Would that allow you to more quickly get information and connect with other people but do so in a way — with a design — that gets out of your way when you’re not interacting with technology? That’s sort of what led us to Glass.” 

The problem here was one of incompatible operating systems – the one that drove Google Glass and the one we have baked into our brains. It turned out that maybe the technology was a little too close to our senses. A 2016 study (Lewis and Neider) found that trying to split attention between two different types of tasks – one scanning information on a heads up display and one trying to focus on the task at hand – ended up with the brain not being able to focus effectively on either. The researchers ended with this cautionary conclusion: “Our data strongly suggest that caution should be exercised when deploying HUD-based informational displays in circumstances where the primary user task is visual in nature. Just because we can, does not mean we should.”

For anyone who spends even a little time wondering how the brain works, this should not come as a surprise. There is an exhaustive list of research showing that the brain is not that great at multi-tasking. Putting a second cognitive task for the brain in our line of sight simply means the distraction is all that much harder to ignore.

Maybe there’s a lesson here for Google. I think sometimes they get a little starry-eyed about their own technological capabilities and forget to factor in the human element. I remember talking to a roomful of Google engineers more than a decade ago about search behaviors. I asked them if any of them had heard of Pirolli and Card’s pioneering work on Information Foraging theory. Not one hand went up. I was gobsmacked. That should be essential reading for anyone working on a search interface. Yet, on that day, the crickets were chirping loudly in Mountain View.

If the Glass team had done their human homework, they would have found that the brain needs to focus on one task at a time. If you’re looking to augment reality with additional information, that information has to be synthesized into a single cohesive task for the brain. This means that for augmented reality to be successful, the use case has to be carefully studied to make sure the brain isn’t overloaded.

But I suspect there was another sticking point that prevented Google Glass from being widely adopted. It challenged the very nature of our relationship with technology. We like to believe we control technology, rather than the other way around. We have defined the online world as somewhere we “go” to through our connected devices. We are in control of when and where we do this. Pulling a device out and initiating an action keeps this metaphorical divide in place.

But Google Glass blurred this line in a way that made us uncomfortable. Again, a decade ago, I talked about the inevitable tipping point that will come with the merging of our physical and virtual worlds. Back then, I said, “as our technology becomes more intimate, whether it’s Google Glass, wearable devices or implanted chips, being ‘online’ will cease to be about ‘going’ and will become more about ‘being.’  As our interface with the virtual world becomes less deliberate, the paradigm becomes less about navigating a space that’s under our control and more about being an activated node in a vast network.”

I’m just speculating, but maybe Google Glass was just a step too far in this direction – for now, anyway.

(Feature image: Tim.Reckmann, CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0, via Wikimedia Commons)