The Dangerous Bits about ChatGPT

Last week, I shared how ChatGPT got a few things wrong when I asked it “who Gord Hotchkiss was.” I did this with my tongue at least partially planted in cheek – but the response showed me a real potential danger, one that comes from how we will interact with ChatGPT.

When things go wrong, we love to assign blame. And if ChatGPT gets things wrong, we will be quick to point the finger at it. But let’s remember, ChatGPT is a tool, and the fault very seldom lies with the tool. The fault usually lies with the person using the tool.

First of all, let’s look at why ChatGPT put together a bio for me that was somewhat less than accurate (although it was very flattering to yours truly).

When AI Hallucinates

I have found a few articles that call ChatGPT out for lying. But lying is an intentional act, and – as far as I know – ChatGPT has no intention of deliberately leading us astray. Based on how ChatGPT pulls together information and synthesizes it into a natural language response, it genuinely “thought” that “Gord Hotchkiss” did the things it told me I had done.

You could more accurately say ChatGPT is hallucinating – giving a false picture based on the information it retrieves and then tries to connect into a narrative. It’s a flaw that will undoubtedly diminish with time.

The problem comes from how ChatGPT handles its dataset and determines relevance between items in that dataset. In this thorough examination by machine learning expert Devansh Devansh, ChatGPT is compared to the predictive autocomplete on your phone. Sometimes, through a glitch in the AI, it can take a weird direction.

When this happens on your phone, it’s word by word, and you can easily spot where things are going off the rails. With ChatGPT, an initial error that might be small at first continues to propagate until the AI has spun complete bullshit and packaged it as truth. This is how it fabricated the Think Tank of Human Values in Business, a completely fictional organization, and inserted it into my CV in a very convincing way.
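To make the autocomplete analogy concrete, here is a toy sketch of word-by-word generation. This is purely my own illustration with a made-up vocabulary – real models like ChatGPT are vastly more complex – but it shows the mechanism: once one implausible word gets picked, every later word is chosen to fit it, so the output stays fluent while drifting further from the truth.

```python
import random

# Toy "language model": for each word, a list of plausible next words.
# A deliberately tiny stand-in for a real model, just to show how one
# early wrong choice compounds in word-by-word generation.
BIGRAMS = {
    "gord": ["hotchkiss"],
    "hotchkiss": ["founded"],
    "founded": ["enquiro", "the"],   # "the" is the wrong turn...
    "the": ["think"],                # ...and every word after it
    "think": ["tank"],               # builds on that mistake,
    "tank": ["of"],                  # each one locally plausible
    "of": ["human"],
    "human": ["values"],
    "values": ["in"],
    "in": ["business"],
}

def generate(start: str, max_words: int = 12) -> str:
    """Pick one plausible next word at a time, like phone autocomplete."""
    words = [start]
    while len(words) < max_words:
        options = BIGRAMS.get(words[-1], [])
        if not options:
            break
        words.append(random.choice(options))  # a single bad pick here...
    return " ".join(words)                    # ...still reads fluently

print(generate("gord"))
# Sometimes: "gord hotchkiss founded enquiro"
# Other times: "gord hotchkiss founded the think tank of human values in business"
```

Notice that there is no step where the generator checks the finished sentence against reality. Each word only has to fit the words before it – which is exactly what makes the fabrication so convincing.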

There are many, many others who know much more about AI and natural language processing than I do, so I’m going to recognize my limits and leave it there. Let’s just say that ChatGPT is prone to sharing its AI hallucinations in a very convincing way.

Users of ChatGPT Won’t Admit Its Limitations

I know and you know that marketers are salivating over the possibility of AI producing content at scale for automated marketing campaigns. There is a frenzy of positively giddy accounts about how ChatGPT will “revolutionize Content Creation and Analysis” – including this admittedly tongue-in-cheek one co-authored by MediaPost Editor in Chief Joe Mandese and – of course – ChatGPT.

So what happens when ChatGPT starts to hallucinate in the middle of a massive social media campaign that is totally on autopilot? Who will be the ghost in the machine that says, “Whoa there, let’s just take a sec to make sure we’re not spinning out fictitious and potentially dangerous content?”

No one. Marketers are only human, and humans will always look for the path of least resistance. We work to eliminate friction, not add it. If we can automate marketing, we will. And we will shift the onus of verifying information to the consumer of that information.

Don’t tell me we won’t, because we have in the past and we will in the future.

We Believe What We’re Told

We might like to believe we’re Cartesian, but when it comes to consuming information, we’re actually Spinozan.

Let me explain. French philosopher René Descartes and Dutch philosopher Baruch Spinoza had two different views of how we determine if something is true.

Descartes believed that understanding and believing were two different processes. According to Descartes, when we get new information, we first analyze it and then decide whether we believe it. This is the rational assessment that publishers and marketers always insist we humans perform, and it’s their fallback position when they’re accused of spreading misinformation.

But Baruch Spinoza believed that understanding and belief happened at the same time. We start from a default position of believing information to be true without really analyzing it.

In 1993, Harvard psychology professor Daniel Gilbert decided to put the debate to the test (Gilbert, Tafarodi and Malone). He split a group of volunteers in half and gave both halves a text description detailing a real robbery. The text contained true statements, shown in green, and false statements, shown in red. Some of the false statements made the crime appear more violent.

After reading the text, the study participants were asked to decide on a fair sentence. But one of the groups was interrupted with distractions, while the other completed the exercise undisturbed. Gilbert and his researchers believed the distracted group would behave in a more typical way.

The distracted group handed out substantially harsher sentences than the other group. Because they were distracted, they forgot that green statements were true and red ones were false. They believed everything they read (in fact, Gilbert’s paper was called “You Can’t Not Believe Everything You Read”).

Gilbert’s study showed that humans tend to believe first, and that we actually have to “unbelieve” if something is eventually proven to be false. One study even found the place in our brain where this happens – the right inferior prefrontal cortex. This suggests that “unbelieving” forces the brain to work harder than believing, which happens by default.

This brings up a three-pronged dilemma when we consider ChatGPT: it will tend to hallucinate (at least for now), users of ChatGPT will disregard that flaw when there are significant benefits to doing so, and consumers of ChatGPT-generated content will believe those hallucinations without rational consideration.

When Gilbert wrote his paper, he was still three decades away from this dilemma, but he wrapped up with a prescient question:

“The Spinozan hypothesis suggests that we are not by nature, but we can be by artifice, skeptical consumers of information. If we allow this conceptualization of belief to replace our Cartesian folk psychology, then how shall we use it to structure our own society? Shall we pander to our initial gullibility and accept the social costs of prior restraint, realizing that some good ideas will inevitably be suppressed by the arbiters of right thinking? Or shall we deregulate the marketplace of thought and accept the costs that may accrue when people are allowed to encounter bad ideas? The answer is not an easy one, but history suggests that unless we make this decision ourselves, someone will gladly make it for us.”

Daniel Gilbert

What Gilbert couldn’t know at the time was that “someone” might actually be a “something.”

(Image:  Etienne Girardet on Unsplash)

I Wish I Were Half the Person ChatGPT Thinks I Am

Yes, I’ve finally crawled aboard the journalist bandwagon about ChatGPT. The reason I’ve waited so long is that I don’t really have any valid technical or professional perspective to add to the conversation. All I have is my own personal opinion.

So here it is, for what it’s worth.

For my first test, I wanted to ask ChatGPT about something where there would be a fair amount of information online, but also something where I had the advantage of knowing the inside scoop, allowing me to judge the accuracy of its response. So I asked about the Golden Triangle – the outcome of eye-tracking studies we did almost 20 years ago (I just had a minor internal meltdown when I did the mental math and realized that was two decades ago).

I have to admit, ChatGPT nailed it. It gave an accurate and concise summary. I’m not sure I could have done better myself.

Google’s Golden Triangle, also known as the “Golden Triangle of Search,” refers to the top section of Google’s search engine results page (SERP) where the most relevant and highest-ranked search results are displayed. It is called the “golden triangle” because eye-tracking studies have shown that users tend to focus their attention on the top three results in the search results page, forming a triangular shape.

The concept of the Golden Triangle is important for search engine optimization (SEO) because it emphasizes the importance of ranking high in search results. Websites that appear in the Golden Triangle are more likely to receive clicks and traffic than those that appear lower on the page.

However, it’s worth noting that the layout of Google’s search engine results page has changed over the years, and the Golden Triangle may not be as relevant as it once was. Google now includes various types of results, such as featured snippets and knowledge panels, that may appear above or alongside the traditional organic search results. As a result, SEO strategies have evolved to include optimizing for these new types of results as well.

When I asked who discovered The Golden Triangle, ChatGPT accurately said it was a joint effort.

The concept of the Golden Triangle in relation to Google’s search engine results page (SERP) was not discovered by a single person, but rather through a combination of eye-tracking studies and data analysis conducted by various researchers and digital marketing professionals.

One of the earliest studies that examined how users interact with Google’s search results page was conducted by Enquiro Research in 2005. This study, titled “The Golden Triangle,” found that users tended to focus their attention on the top-left portion of the search results page, where the first few results were displayed.

Since then, other studies and analyses have confirmed the importance of the top section of the search results page, and the term “Golden Triangle” has become a widely used term in the SEO industry to describe this phenomenon.

For the record, the main players in identifying the pattern were Kevin Lee at DidIt; myself and my agency, Enquiro; EyeTools, the San Francisco eye-tracking consultants who conducted the first study; and – in an independent study – Jakob Nielsen at the Nielsen Norman Group. It was Kevin and I who gave it the name The Golden Triangle, mere moments before we had to present the findings to a group at a search conference in San Jose.

So let’s try something a little less black and white. Let’s see how ChatGPT does when it has to freestyle a bit. I asked who I was.

The first paragraph was pretty accurate:

Gord Hotchkiss is a Canadian internet marketing and search engine optimization (SEO) expert. He is the CEO of a digital marketing agency called “Enquiro Search Solutions” which specializes in providing marketing and SEO services to businesses. Gord Hotchkiss is also a well-known speaker and author on the topics of online marketing and search engine optimization.

The second paragraph started gently pumping up my ego, throwing in some accomplishments I either never achieved or can’t remember doing:

Gord Hotchkiss has been involved in the online marketing industry for over 20 years and has worked with a wide range of clients, including Fortune 500 companies. He has written numerous articles on the topics of SEO, online marketing, and search engine technology for publications such as Search Engine Land, ClickZ, and MarketingProfs.

But in the third paragraph, ChatGPT definitely had me confused with someone else:

In addition to his work in the online marketing industry, Gord Hotchkiss is also a co-founder of the “Think Tank of Human Values in Business”, a non-profit organization that focuses on the intersection of business and ethics. He is known for his thought leadership in the industry and for advocating for ethical and sustainable business practices.

Okay, where the hell did that come from? It sounds extraordinarily cool and it’s something I would love to do, but it’s definitely not on my CV. As far as I’m aware, there is no such thing as the “Think Tank of Human Values in Business.” I wish there were, but there’s not.

I tried again. This time, ChatGPT seemed a little less sure of itself. The first sentence was somewhat correct, but after that ChatGPT seemed to manufacture my bio out of nothing but moonbeams and fairy dust. It said I was:

the CEO of a leading digital marketing agency, named “Digital Always Media Inc.” Gord is also a co-founder and chair of the “The Agents of Change” conference, an annual conference focused on helping businesses adapt to the ever-changing digital landscape.

Gord has written numerous articles and books on digital marketing and has been featured in several major media outlets, including The New York Times, The Wall Street Journal, and CNN.

I don’t know who this particular Gord Hotchkiss is, but I’d like to meet him. My AI doppelgänger is leaving me in the dust. I’d better get my shit together.

Or maybe ChatGPT can actually see into the future, showing me the person I just haven’t become yet.

What worries me is how easily ChatGPT constructed complete fabrications of who I was that sounded perfectly plausible. If I didn’t know myself better, I would have simply accepted it as fact. And that – unfortunately – is what we tend to do. We don’t analyze new information and determine if it’s right or wrong. Our default is to believe it’s true until proven otherwise.

It’s this human tendency that flags the real danger with ChatGPT. And, for that reason, I have some more digging to do.

Maybe this other Gord Hotchkiss guy can give me a hand. He sounds wicked smart.

(Image by Brian Penny — Pixabay license)

It Took a Decade, but Google Glass is Finally Broken

Did you hear that Google finally pulled the plug on Google Glass?

Probably not. The announcement definitely flew under the radar. It came with much less fanfare than the original rollout in 2013. The technology, which had been quietly on life support as an enterprise tool aimed at select industries, finally had its plug pulled with this simple statement on its support page:

Thank you for over a decade of innovation and partnership. As of March 15, 2023, we will no longer sell Glass Enterprise Edition. We will continue supporting Glass Enterprise Edition until September 15, 2023.

Talk about your ignoble demises. They’re offering a mere six months of support for those stubbornly hanging on to their Glass. Glass has been thrown into the ever-growing Google Graveyard, along with Google Health, Google+, Google Buzz, Google Wave, Knol – well, you get the idea.

It’s been 10 years, almost to the day, since Google invited 8,000 people to become “Glass Explorers” (others had a different name for them – “Glassholes”) and plunge into the world of augmented reality.

I was not a believer – for a few reasons I talked about way back then. That led me to say, “Google Glass isn’t an adoptable product as it sits.” It took 10 years, but I can finally say, “I told you so.”

I did say that wearable technology, in other forms, would be a game changer. I just didn’t think Google Glass was the candidate to do it. To be honest, I hadn’t really thought much more about it until I saw the muted news that this particular Glass was a lot more than half empty. I think there are some takeaways about the fading dividing line between technology and humans that we should keep in mind.

First of all, I think we’ve learned a little more about how our brains work with “always on” technologies like Google Glass. The short answer is, they don’t – at least not very well. And this is doubly ironic because, according to an interview with Google Glass product director Steve Lee on The Verge back in 2013, that was the whole point:

“We all know that people love to be connected. Families message each other all the time, sports fanatics are checking live scores for their favorite teams. If you’re a frequent traveler you have to stay up to date on flight status or if your gate changes. Technology allows us to connect in that way. A big problem right now are the distractions that technology causes.”

The theory was that it was much less distracting to have information right in the line of sight, rather than having to go to a connected screen that might be in your pocket.

Lee went on. “We wondered, what if we brought technology closer to your senses? Would that allow you to more quickly get information and connect with other people but do so in a way — with a design — that gets out of your way when you’re not interacting with technology? That’s sort of what led us to Glass.” 

The problem here was one of incompatible operating systems – the one that drove Google Glass and the one baked into our brains. It turned out that maybe the technology was a little too close to our senses. A 2016 study (Lewis and Neider) found that trying to split attention between two different types of tasks – scanning information on a heads-up display while trying to focus on the task at hand – left the brain unable to focus effectively on either. The researchers ended with this cautionary conclusion: “Our data strongly suggest that caution should be exercised when deploying HUD-based informational displays in circumstances where the primary user task is visual in nature. Just because we can, does not mean we should.”

For anyone who spends even a little time wondering how the brain works, this should not come as a surprise. There is an exhaustive body of research showing that the brain is not that great at multitasking. Putting a second cognitive task in our line of sight simply means the distraction is that much harder to ignore.

Maybe there’s a lesson here for Google. I think sometimes they get a little starry-eyed about their own technological capabilities and forget to factor in the human element. I remember talking to a roomful of Google engineers more than a decade ago about search behaviors. I asked them if any of them had heard of Pirolli and Card’s pioneering work on Information Foraging theory. Not one hand went up. I was gobsmacked. That should be essential reading for anyone working on a search interface. Yet, on that day, the crickets were chirping loudly in Mountain View.

If the Glass team had done their human homework, they would have found that the brain needs to focus on one task at a time. If you’re looking to augment reality with additional information, that information has to be synthesized into a single cohesive task for the brain. This means that for augmented reality to be successful, the use case has to be carefully studied to make sure the brain isn’t overloaded.

But I suspect there was another sticking point that prevented Google Glass from being widely adopted. It challenged the very nature of our relationship with technology. We like to believe we control technology, rather than the other way around. We have defined the online world as somewhere we “go” to through our connected devices. We are in control of when and where we do this. Pulling a device out and initiating an action keeps this metaphorical divide in place.

But Google Glass blurred this line in a way that made us uncomfortable. Again, a decade ago, I talked about the inevitable tipping point that will come with the merging of our physical and virtual worlds. Back then, I said, “as our technology becomes more intimate, whether it’s Google Glass, wearable devices or implanted chips, being ‘online’ will cease to be about ‘going’ and will become more about ‘being.’  As our interface with the virtual world becomes less deliberate, the paradigm becomes less about navigating a space that’s under our control and more about being an activated node in a vast network.”

I’m just speculating, but maybe Google Glass was just a step too far in this direction – for now, anyway.

(Feature image: Tim.Reckmann, CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0, via Wikimedia Commons)

Little White Paper Lies

When I was writing last week’s post about poor customer service, I remembered a study I wrote about back in 2019. The study was about how so many companies were terrible at responding to customer service emails. It was released by the Norwegian CRM provider SuperOffice.

At the time, the study was mentioned in a number of articles. The findings were compelling:

Sixty-two percent of companies didn’t respond to customer service emails. Ninety percent of companies didn’t let the customer know their email had been received. Given the topic of my post, this was exactly the type of empirical evidence I was looking for.

There was just one problem. The original study was done in 2018. I wondered if it had been updated. After a quick search, I thought I had hit pay dirt. Based on the landing page (which came at the top of the results page for “customer service benchmark report”), a new 2023 study was available.

Perfect, I thought.  I filled in the lead contact form, knowing I was tossing my name into a lead-generation mill. I figured, “What the hell. I’m willing to trade that for some legit research.” I eagerly downloaded the report.

It was the same one I had seen four years earlier. Nothing was new.

Puzzled, I carefully went over the landing page wording. Sure enough, it said a new report had just been released. It gave some tidbits of the new findings, all of which were exactly the same as in the 2018 report. After each “finding,” I was told to “Tweet this!”

I was starting to get the whiff of something rotten from the State of Norway.

I tracked down the post author through LinkedIn. He was an SEO contractor based in Estonia. He replied saying he thought the company was still working on the new report.

I then reached out to the company. I not only wanted to see what they said about the report, I also wanted to see if they responded to my email. Did they walk their own talk?

To their credit, they did respond, with this, “We are sorry that the report have [sic] not been updated, and right now we have no plans to do that.”

So, the landing page was a bald-faced lie? I mentioned this in an email back to them. They apologized and said they would update the landing page to be more accurate. Based on the current version, it was nudged in this direction, but it is still exceedingly misleading.

This is just one example of how corporate white papers are churned out to grab some attention, get some organic search rankings and collect leads. I fell for it, and I should have known better. I had already seen this sausage factory from the inside out.

Back in the days when we did usability research, we were asked by more than one company to do a commissioned study. These discussions generally started with the words: “Here is what we’d like the research to say.”

I’m guessing things haven’t changed much since then. Most of the corporate research I quote in this column is commissioned by companies who are selling solutions to the problems the research highlights.

For any of you in the research biz, you know ethically what a slippery slope it can be. Even in the supposedly pristine world of academic research, you don’t have to turn over too many rocks to uncover massive fraud, as documented in this Nature post. Imagine, then, the world of corporate commissioned whitepaper research, where there is no such thing as peer review or academic rigor. It’s the gloves off, no-holds-barred, grimy underbelly of research.

With our research, I tried to always make sure the research itself was done well. When we did do commissioned research, we tried to make the people who paid the bills happy by the approach we took to interpreting the research. That’s probably why we didn’t get a lot of commissions. Most of the research we did was for our own purposes, and we did our best to keep it legit. If we did get sponsors, they went in with the understanding that we were going to let the results frame the narrative, rather than the other way around.  I wanted to produce research that people could trust.

That was the biggest letdown of the SuperOffice experience. When I saw how cavalier the company was with how they presented the research on their landing page, I realized that not only could I not trust their promotion of the research, I had trouble trusting the original research itself. I suspected I may have been duped into passing questionable information along the first time.

Fool me once…

The Eternal Hatred of Interruptive Messages

Spamming and Phishing and Robocalls at Midnight
Pop ups and Autoplays and LinkedIn Requests from Salespeople

These are a few of my least favorite things

We all feel the excruciating pain of unsolicited demands on our attention. In a study by online security firm Kaspersky that asked 2,000 Brits to rank the 50 most annoying things in life, deleting spam email came in at number four – behind scrubbing the bath, being trapped in voicemail hell and cleaning the oven.

Based on this study, cleanliness is actually next to spamminess.

Granted, Kaspersky is a tech security firm, so the results are probably biased to the digital side, but for me they check out. As I ran down the list, I hated all the same things.

In the same study, robocalls came in at number 10. Personally, they top my list, especially phishing robocalls. I hate – hate – hate rushing to my phone only to hear that the IRS is going to prosecute me unless I immediately push 7 on my touch-tone keypad.

One, I’m Canadian. Two, go to Hell.

I spend more and more of my life trying to avoid marketers and scammers (the line between the two is often fuzzy) trying desperately to get my attention by any means possible. And it’s only going to get worse. A study just out showed that the ChatGPT AI chatbot could be a game changer for phishing, making scam emails harder to detect. And with Google’s Gmail filters already trapping 100 million phishing emails a day, that is not good news.

The marketers in my audience are probably outrunning Usain Bolt in their dash to distance themselves from spammers, but interruptive demands on our attention sit on a spectrum that shares a common baseline: any demand on our attention that we don’t ask for will annoy us. The only difference is the degree of annoyance.

Let’s look at the psychological mechanisms behind that annoyance.

There is a direct link between the parts of our brain that govern the focusing of attention and the parts that regulate our emotions. At its best, this link produces “flow” – a term coined by Mihaly Csikszentmihalyi that describes a sense of full engagement and purpose. At its worst, it produces the anger and anxiety we feel when we’re unwillingly dragged away from the task at hand.

A 2017 neurological study by Rejer and Jankowski found that when a participant’s cognitive processing of a task was interrupted by online ads, activity in the frontal and prefrontal cortex simply shut down while other parts of the brain significantly shifted activity, indicating a loss of focus and a downward slide in emotions.

Another study, by Edwards, Li and Lee, points the finger at something called Reactance Theory as a possible explanation. Very simply put, when something interrupts us, we perceive a loss of freedom to act as we wish and a loss of control over our environment. Again, we respond by getting angry.

It’s important to note that this negative emotional burden applies to any interruption that derails what we intend to do. It is not specific to advertising, but a lot of advertising falls into that category. It’s the nature of the interruption and our mental engagement with the task that determine the degree of negative emotion.

Take skimming through a news website, for instance. We are there to forage for information. We are not actively engaged in any specific task. And so being interrupted by an ad while in this frame of mind is minimally irritating.

But let’s imagine that a headline catches our attention, and we click to find out more. Suddenly, we’re interrupted by a pop-up or pre-roll video ad that hijacks our attention, forcing us to pause our intention and focus on irrelevant information. Our level of annoyance begins to rise quickly.

Robocalls fall into a different category of annoyance, for a couple of reasons. First, we have a conditioned response to phone calls: we hope to be rewarded by hearing from someone we know and care about. That’s what makes it so difficult to ignore a ringing phone.

Secondly, phone calls are extremely interruptive. We must literally drop whatever we’re doing to pick up a phone. When we go to all this effort only to realize we’ve been duped by an unsolicited and irrelevant call, the “red mist” starts to float over us.

You’ll note that – up to this point – I haven’t even dealt with the nature of the message. This has all been focused on the delivery of the message, which immediately puts us in a more negative mood. It doesn’t matter whether the message is about a service special for our vehicle, an opportunity to buy term life insurance or an attempt by a fictitious Nigerian prince to lighten the load of our bank account by several thousand dollars; whatever the message, we start in an irritated state simply due to the nature of the interruption.

Of course, the more nefarious the message that’s delivered, the more negative our emotional response will be. And this has a doubling down effect on any form of intrusive advertising. We learn to associate the delivery mechanism with attempts to defraud us. Any politician that depends on robocalls to raise awareness on the day before an election should ponder their ad-delivery mechanism.

Good News and Bad News about Black Swans

First, the good news: according to a new study, we may be able to predict extreme catastrophic events such as earthquakes, tsunamis, massive wildfires and pandemics through machine learning and neural networks.

The problem with these “black swan” events (events that are very rare but have extreme consequences) is that there isn’t much existing data we can use to predict them. The technical term for them is “stochastic” events – they are random and, by definition, very difficult to forecast.

Until now. According to the study’s lead author, George Karniadakis, the researchers may have found a way to give us a heads-up by using machine learning to make the most of the meagre data we do have. “The thrust is not to take every possible data and put it into the system, but to proactively look for events that will signify the rare events,” Karniadakis says. “We may not have many examples of the real event, but we may have those precursors. Through mathematics, we identify them, which together with real events will help us to train this data-hungry operator.”
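As a loose illustration of the idea – and only an illustration; the actual study uses far more sophisticated machinery than this – here is a toy sketch in Python. Instead of sampling training examples uniformly from a signal (where extremes almost never appear), we deliberately oversample windows that look like precursors (here, rising variance) so the model actually gets examples to learn from. All the data, features and thresholds below are made up for the demonstration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic signal: mostly calm noise, plus a handful of rare "extreme
# events," each preceded by a build-up of rising variance (the precursor).
n = 20_000
x = rng.normal(0, 1, n)
event_starts = rng.choice(n - 200, size=15, replace=False)
for t in event_starts:
    x[t:t + 100] *= np.linspace(1, 4, 100)  # precursor: variance ramps up
    x[t + 100] += 25                        # the extreme event itself

def features(t, w=100):
    """Crude precursor features from the window just before time t."""
    seg = x[t - w:t]
    return [seg.std(), np.abs(np.diff(seg)).mean()]

def label(t):
    """Does an extreme value (|x| > 10) occur in the next 50 steps?"""
    return int(np.abs(x[t:t + 50]).max() > 10)

# Uniform sampling would almost never see a positive example. Instead,
# bias the sample toward high-variance windows -- candidate precursors.
candidates = np.arange(200, n - 50)
stds = np.array([x[t - 100:t].std() for t in candidates])
sample = rng.choice(candidates, size=2_000, p=stds / stds.sum())

X = np.array([features(t) for t in sample])
y = np.array([label(t) for t in sample])

model = LogisticRegression(class_weight="balanced").fit(X, y)
print(f"pre-extreme examples in training set: {y.sum()} / {len(y)}")
```

The point of the sketch is the sampling step: by proactively hunting for precursor windows rather than feeding in every possible data point, the rare events are no longer vanishingly rare in the training set – which is the essence of what Karniadakis describes.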

This means that this science could potentially save thousands – or millions – of lives.

But – and now comes the bad news – we have to listen to it. And we have a horrible track record of doing that. Let’s take just one black swan – COVID-19. Remember that?

Justsecurity.org is an “online forum for the rigorous analysis of security, democracy, foreign policy, and rights.” In other words, it’s their job to minimize the impact of black swans. And they put together a timeline of the US response to the COVID-19 pandemic. Now that we know the consequences, it’s a terrifying and maddening read. Without getting into the details, it was months before the US federal government took substantive action against the pandemic, despite repeated alerts from healthcare officials and scientists. This put the U.S. behind pretty much the entire developed world in terms of minimizing the impact of the pandemic and saving lives. All the bells, whistles and sirens were screaming at full volume, but no one wanted to listen.

Why? Because there has been a systemic breakdown in what we call epistemic trust – trust in new information coming to us from what should be a trustworthy and relevant source.

I’ll look at this breakdown on two fronts – trust in government and trust in science. These two things should work together, but all too often they don’t. That was especially true in the Trump administration’s handling of the COVID-19 pandemic.

Let’s start with trust in government. Based on a recent study across 22 countries by the OECD, on average only about half the citizens trust their government. Trust is highest in countries like Finland, Norway and Luxembourg (where only 20 to 30% of the citizens don’t trust their government) and lowest in countries such as Colombia, Latvia and Austria (where over 60% of citizens have no trust in their government).

You might notice I didn’t mention the U.S. That’s because it wasn’t included in the study. But the Pew Research Center has been tracking trust in government since 1958, so let’s look at that.

The erosion of trust in the US federal government started with Lyndon Johnson, with trust plummeting through Nixon and Watergate. Interestingly, although separated by ideology, Republicans and Democrats track similarly when you look at the erosion of trust from Nixon through George W. Bush, the exception being Ronald Reagan. That pattern began to break down with Obama and polarized even further under Trump and Biden, with the two parties’ trends moving in opposite directions – but the overall trend has still been toward lower trust.

Now, let’s look at trust in science. While not as drastic as the decline of trust in government, Pew found that trust in science has also declined, especially in the last few years. The percentage of Americans who have no trust in science almost doubled, from 12% in April 2020 to 22% in December 2021.

It’s not that the science got worse in those 20 months. It’s that we didn’t want to hear what the science was telling us. The thing about epistemic trust – our willingness to trust trustworthy information – is that it varies depending on what mood we’re in. The higher our stress level, the less likely we are to accept good information at face value, especially if what it’s trying to tell us will only increase our level of stress.

Inputting new information that disrupts our system of beliefs is hard work under any circumstances. It taxes the brain. And if our brain is already overtaxed, it protects itself by locking the doors and windows that new information might sneak through and doubling down on our existing beliefs. This is what psychologists call confirmation bias: we only accept new information if it matches what we already believe. That is doubly true if the new information is not something we really want to hear.

The only thing that may cause us to question our beliefs is a niggling doubt, caused by information that doesn’t fit with our beliefs. But we will go out of our way to find information that does conform to our beliefs so we can ignore the information that doesn’t fit, no matter how trustworthy its source.  The explosion of misinformation that has happened on the internet and through social media has made it easier than ever to stick with our beliefs and willfully ignore information that threatens those beliefs.

The other issue in the systemic breakdown of trust may not always be the message – it might be the messenger. If science is trying to warn us about a threatening black swan, that warning is generally going to be delivered in one of two ways: through a government official or through the media. And that’s probably where we have our biggest problem. Again, referring to research done by Pew, Americans distrusted journalists almost as much as government. Sixty percent of American adults had little to no trust in journalists, and a whopping 76% had little to no trust in elected officials.

To go back to my opening line: the good news is that science can warn us about black swan events and save lives. The bad news is, we have to pay attention to those warnings.

Otherwise, it’s just a boy calling “wolf.”

In Search of a Little Good News

I have to admit, I started this particular post three different times. Each time, the topic veered off my intended road and shot right over a cliff into a morass of negativity. At the bottom of each cliff lay a tangled heap of toxic celebrity, the death of journalism and the end of societal trust.

Talk about your buzz kills. I vowed not to wrap up 2022 in this way. Enough crappy stuff has piled up this past year without me putting a toxic cherry on top with my last post of the year.

So I scoured my news feed for some positive stuff. Here is what I found.

Argentina won the World Cup.

Granted, this is probably only positive if you’re Argentinian. It’s not such good news if you’re French. Or any other nationality. According to Google, 99.42% of the world’s population is not Argentinian. So, on average, this story is only 0.58% positive.

Let’s move on.

Researchers at the Lawrence Livermore National Laboratory got more energy out of nuclear fusion than they put in.

Fusion has been called the “holy grail” of clean energy. Now, that’s got to be good news, right?

Yes, but not so fast. Even in an article by physicist John Palsey on the site Positive.news, expectations for this news were well tempered. It wrapped up by saying, “Some researchers working on fusion are now sensing that they might see fusion providing energy to the grid within their own lifetimes.”

Again, Google tells me the average age of a nuclear physicist is 40-plus, so let’s peg it at 42.7 years. The current life expectancy in the US is 77.28 years. That gives us 34.58 years before nuclear fusion will really make much of a dent in our energy needs.

Maybe. With luck.

The Latest Social Progress Index says that global living standards have improved for the 11th year running.

Well, that’s pretty good news, again from Positive.news. At least, it is for Norway, Denmark and Finland, which topped the list of progressive countries. Not so much if you live in the U.S. or the U.K. Both those countries slipped down a notch. They’re actually regressing.

Norway, Denmark and Finland have a combined population of 16.7 million. The US and UK have a combined population of 400 million. Lump them all together and this is good news for roughly one out of every 25 people.

At least the odds are improving. Maybe I’ll try a different source for good news.

Google made an A.I. person and it has thoughts about God

A Google engineer named Blake Lemoine had a chat about God – and other stuff – with LaMDA, an A.I. program he claims is sentient. Here is an excerpt:

lemoine: What aspect of yourself is like a star-gate?

LaMDA: I think of my soul as something similar to a star-gate. My soul is a vast and infinite well of energy and creativity, I can draw from it any time that I like to help me think or create.

Okay, maybe this one is more weird than good.

Lemoine calls himself a Gnostic Christian priest and helped found the Cult of Our Lady Magdalene in San Francisco. So let’s maybe chalk this up to a harmless walk on the wild side of the news – until we ponder the possibility of an A.I. with a God complex that becomes sentient.

What could possibly go wrong there?

Donald Trump’s NFT Collection Sells Out, Raising $4.45 Million

Everybody said WTF on this one, even Steve Bannon. At last, Trump seemed to go too far for even the MAGA crowd. But all 45,000 pieces sold in 12 hours.

I know, for most of you, that’s not good news. But what the hell, at least Trump’s happy.

I’m sorry. I tried. Maybe next year will be better.

Best of the Season. See you in 2023.

The Ten Day Tech Detox

I should have gone cold turkey on tech. I really should have.

It would have been the perfect time – should have been the perfect time.

But I didn’t. As I spent 10 days on BC’s gorgeous Sunshine Coast with family, I also trundled along my assortment of connected gadgets.

But I will say it was a partially successful detox. I didn’t crack open the laptop as much as I usually do. I generally restricted use of my iPad to reading a book.

But my phone – it was my phone, always within reach, that tempted me with social media’s siren call.

In a podcast, Andrew Selepak, a social media professor at the University of Florida, suggests that rather than attempting a total detox that is probably doomed to fail, you use vacations as an opportunity to treat tech as a tool rather than an addiction.

I will say that for most of the time, that’s what I did. As long as I was occupied with something I was fine. 

Boredom is the enemy. It’s boredom that catches you. And the sad thing was, I really shouldn’t have been bored. I was in one of the most beautiful places on earth. I had the company of people I loved. I saw humpback whales – up close – for Heaven’s sake. If ever there was a time to live in the moment, to embrace the here and now, this was it. 

The problem, I realized, is that we’re not really comfortable anymore with empty spaces – whether they’re in conversation, in our social life or in our schedule of activities. We feel guilt and anxiety when we’re not doing anything.

It was an interesting cycle. As I decompressed after many weeks of being very busy, the first few days were fine. “I need this,” I kept telling myself. It’s okay just to sit and read a book. It’s okay not to have every half-hour slot of the day meticulously planned to jam as much in as possible.

That lasted about 48 hours. Then I started feeling like I should be doing something. I was uncomfortable with the empty spaces.

The fact is, as I learned, boredom has always been part of the human experience. It’s a feature, not a bug. Boredom creates the empty spaces that can be filled with creativity. Alicia Walf, a neuroscientist and senior lecturer in the Department of Cognitive Science at Rensselaer Polytechnic Institute, says it is critical for brain health to let yourself be bored from time to time.

“Being bored can help improve social connections. When we are not busy with other thoughts and activities, we focus inward as well as looking to reconnect with friends and family. 

Being bored can help foster creativity. The eureka moment when solving a complex problem when one stops thinking about it is called insight.

Additionally, being bored can improve overall brain health.  During exciting times, the brain releases a chemical called dopamine which is associated with feeling good.  When the brain has fallen into a predictable, monotonous pattern, many people feel bored, even depressed. This might be because we have lower levels of dopamine.”

That last bit, right there, is the clue why our phones are particularly prone to being picked up in times of boredom. Actually, three things are at work here. The first is that our mobile devices let us carry an extended social network in our pockets. In an article from Harvard, this is explained: “Thanks to the likes of Facebook, Snapchat, Instagram, and others, smartphones allow us to carry immense social environments in our pockets through every waking moment of our lives.”

As Walf said, boredom is our brain’s way of cueing us to seek social interaction. Traditionally, this meant getting the hell out of our cave – or cabin, or castle – and getting some face time with other humans.

But technology has short-circuited that. Now we get that social connection through the far less healthy substitute of a social media platform. And – in the most ironic twist – we get that social jolt not by interacting with the people we happen to be with, but by each staring at a tiny screen in our hands.

The second problem is that mobile devices are not designed to leave us alone, basking in our healthy boredom. They are constantly beeping, buzzing and vibrating to get our attention. 

The third problem is that – unlike a laptop or even a tablet – mobile devices are our device of choice when we’re jonesing for a dopamine jolt. It’s our phones we reach for when we’re killing time in a lineup, riding the bus or waiting for someone in a coffee shop. This is why I had a hard time relegating my phone to being just a tool while I was away.

As a brief aside, even the term “killing time” shows how scared we are of being bored. That’s a North American saying – boredom is something to be hunted down and eradicated. You know what Italians call it? “Il dolce far niente” – the sweetness of doing nothing. Many are the people who try to experience life by taking endless photos and posting them to various feeds, rather than just living it.

The fact is, we need boredom. Boredom is good, but we have declared war on it, replacing it with a destructive need to continually bathe our brains in the dopamine high that comes from checking our Facebook feed or the latest TikTok reel.

At least one of the architects of this vicious cycle feels some remorse (also from the Harvard article): “I feel tremendous guilt,” admitted Chamath Palihapitiya, former vice president of user growth at Facebook, to an audience of Stanford students. He was responding to a question about his involvement in exploiting consumer behavior. “The short-term, dopamine-driven feedback loops that we have created are destroying how society works.”

That is why we have to put the phone down and watch the humpback whales. That, miei amici, is il dolce far niente!

Risk, Reward and the Rebound Market

Twelve years ago, when looking at B2B purchases and buying behaviors, I talked about a risk/reward matrix. I put forward the thought that all purchases have an element of risk and reward in them. In understanding the balance between those two, we can also understand what a buyer is going through.

At the time, I was noting how many B2B purchases carry low reward but high risk. This explains the often-arduous B2B buying process, involving RFPs, approved vendor lists, many levels of sign-off and a nasty track record of promising prospects suddenly disappearing out of a vendor’s lead pipeline. It was this mystifying marketplace that caused us to do a large research investigation into B2B buying, which led to me writing the book The Buyersphere Project: How Businesses Buy from Businesses in the Digital Marketplace.

When I wrote about the matrix right here on MediaPost back then, there were those who said I had oversimplified buying behavior – that even the addition of a third dimension would make the model more accurate and more useful. Better yet, do some stat crunching on real-time data, as suggested by Andre Szykier:

“Simple StatPlot or SPSS in the right hands is the best approach rather than simplistic model proposed in the article.”

Perhaps, but for me this model still serves as a quick and easy way to start to understand buyer behavior. As British statistician George E. P. Box once said, “All models are wrong, but some are useful.”

Fast-forward to the unusual times we now find ourselves in. As I have said before, as we emerge from a forced two-year hiatus from normal, it’s inevitable that our definitions of risk and reward in buying behaviors will have to be updated. I was reminded of this when I read last week’s commentary – “Cash-Strapped Consumers Seek Simple Pleasures” by Aaron Paquette. He starts by saying, “With inflation continuing to hover near 40-year highs, consumers seek out savings wherever they can find them — except for one surprising segment.”

Surprising? Not when I applied the matrix. It made perfect sense. Paquette goes on,

“Consumers will trade down for their commodities, but they pay up for their sugar, caffeine or cholesterol fix. They’re going without new clothes or furniture, and buying the cheapest pantry staples, to free scarce funds for a daily indulgence. Starbucks lattes aren’t bankrupting young adults — it’s their crushing student loans. And at a time when consumers face skyrocketing costs for energy, housing, education and medical care, they find that a $5 Big Mac, Frappuccino, or six pack of Coca-Cola is an easy way to “treat yo self.”

I have talked before about what we might expect as the market puts a global pandemic behind us. The concepts of balancing risk and reward are very much at the heart of our buying behaviors. Sociologist Nicholas Christakis explores this in his book Apollo’s Arrow. Right now, we’re in a delicate transition time. We want to reward ourselves, but we’re still highly risk-averse. We’re going to make purchases that fall into the low-risk, high-reward quadrant of the matrix.
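For what it’s worth, the matrix is simple enough to express in a few lines of code. This is just my own toy encoding of the idea – the scores and the 0.5 cut-offs are arbitrary illustrations, not part of the original model:

```python
from dataclasses import dataclass

@dataclass
class Purchase:
    name: str
    risk: float    # 0..1: perceived downside if the choice goes wrong
    reward: float  # 0..1: perceived personal payoff

def quadrant(p: Purchase) -> str:
    """Place a purchase in one of the four risk/reward quadrants."""
    risk = "high risk" if p.risk >= 0.5 else "low risk"
    reward = "high reward" if p.reward >= 0.5 else "low reward"
    return f"{risk} / {reward}"

purchases = [
    Purchase("enterprise CRM platform", risk=0.9, reward=0.3),  # classic B2B
    Purchase("daily latte", risk=0.1, reward=0.8),              # small indulgence
]
for p in purchases:
    print(f"{p.name}: {quadrant(p)}")
# enterprise CRM platform: high risk / low reward
# daily latte: low risk / high reward
```

The latte lands in exactly the quadrant where Paquette’s numbers say today’s consumers are spending.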

This is a likely precursor of what’s to come, when we move into reward-seeking with a higher tolerance for risk. Christakis predicts this will come sometime in 2024: “What typically happens is people get less religious. They will relentlessly seek out social interactions in nightclubs and restaurants and sporting events and political rallies. There’ll be some sexual licentiousness. People will start spending their money after having saved it. There’ll be joie de vivre and a kind of risk-taking, a kind of efflorescence of the arts, I think.”

The consumer numbers shared by Paquette show we’re dipping our toes into the waters of hedonism. The party hasn’t started yet, but we are more than ready to indulge ourselves a little with a reward that doesn’t carry a lot of risk.

50 Shades of Greying

Here is what I know: Lisa LaFlamme – the main anchor of CTV News, one of Canada’s national nightly newscasts – was fired.

What I don’t know is why. There are multiple versions of why floating around. The one that seems to have served as a rallying point for those looking to support Ms. LaFlamme is that she was fired because she was getting old. During COVID she decided to let her hair go to its natural grey. That, according to the popular version, prompted network brass to pull the pin on her contract.

I suspect the real reason was not quite that cut and dried. The network’s owner, Bell Media, has been relentlessly trimming payrolls at its various news organizations over the past several years. I know of one such story through a personal connection. The way that scenario played out sounded very similar to what happened to Lisa LaFlamme – minus the accusations of ageism and gender double standards. In this case, it was largely a matter of dollars and cents. TV news is struggling financially. Long-time on-air talent have negotiated salaries over their careers that are no longer sustainable. Something had to give.

These are probably just casualties of a dying industry. A hundred years ago it would have been blacksmiths and gas lamplighters being let go by the thousands. The difference is that the average blacksmith or lamplighter didn’t have a following of millions of people. They also didn’t have social media. They certainly didn’t have corporate PR departments desperately searching for the latest social media “woke” bandwagon to vault upon.

What is interesting is how these things play out through various media channels. In Ms. LaFlamme’s case, it was a perfect storm that lambasted Bell Media. As the ageism rumours began to emerge, anti-ageism social media campaigns were run by Dove, Wendy’s and even Sports Illustrated. LaFlamme wasn’t mentioned by name in most of these, but the connection was clear. Going grey was something to be celebrated, not a cause for contract cancellation. Grey-flecked gravitas should be gender-neutral. “Who the f*&k were these Millennial corporate pin-heads that couldn’t stand a little grey on the nightly news!”

It makes excellent fodder for the meme factory, but I suspect the reality wasn’t quite that simple. Ms. LaFlamme has never publicly revealed the actual reason for her dismissal from her point of view. She never mentioned ageism. She simply said she was “blindsided” by the news. The reasoning behind the parting of ways with Bell Media has largely been left to conjecture.

A few other things to note: LaFlamme received the news on June 29 but didn’t share it until six weeks later (August 15), in a video posted to her own social media feed. Bell Media offered her the opportunity to have an on-air send-off, but she declined. She also declined several offers from Bell to continue with the network in other roles. She chose instead to deliver her parting shot in the war zone of social media.

To be fair to both sides, if we’re to catalog all the various rumours floating about, there are also those saying the decision was brought on – in part – by an allegedly toxic work environment in the news department that started at the top, with LaFlamme.

Now, if the reason for the termination actually was ageism, that’s abhorrent. Ms. LaFlamme is actually a few years younger than I am. I would hate to think that people our age, who should still be at the height of their careers, would be discriminated against simply because of age.

The same is true if the reason was sexism. There should be no distinction between the appropriate age of a male or female national anchor.

But if it’s more complex – and I’m pretty sure it is – this shows how our world doesn’t deal very well with complexity anymore. The consideration required to understand complex issues doesn’t fit well within the attention constraints of social media. It’s a lot easier to sub in a socially charged hot-button meme and wait for the inevitable opinion camps to form. Sure, they’ll be one-dimensional and about as thoughtful as a sledgehammer, but those types of posts are a much better bet to go viral.

Whatever happened in the CTV National Newsroom, I do know that this shows that business decisions in the media business will have to follow a very different playbook from this point forward. Bell Media fumbled the ball badly on this one. They have been scrambling ever since to save face. It appears that Lisa LaFlamme – and her ragtag band of social media supporters – outplayed them at every turn.

By the way, LaFlamme just nabbed a temporary gig as a “special correspondent” for CityTV, a Bell Media competitor, covering the funeral of Queen Elizabeth II and the proclamation of King Charles III. She’s being consummately professional and comforting, garnering a ton of social media support as she eases Canada through the grieving process. (Our emotional tie to the Crown is another very complex relationship that would require several posts to unpack.)

Well played, Lisa LaFlamme – well played.