The Dangerous Bits about ChatGPT

Last week, I shared how ChatGPT got a few things wrong when I asked it “who Gord Hotchkiss was.” I did this with my tongue at least partially planted in cheek – but the response did show me a real potential danger here, coming from how we will interact with ChatGPT.

When things go wrong, we love to assign blame. And if ChatGPT gets things wrong, we will be quick to point the finger at it. But let’s remember, ChatGPT is a tool, and the fault very seldom lies with the tool. The fault usually lies with the person using the tool.

First of all, let’s look at why ChatGPT put together a bio for me that was somewhat less than accurate (although it was very flattering to yours truly).

When AI Hallucinates

I have found a few articles that call ChatGPT out for lying. But lying is an intentional act, and – as far as I know – ChatGPT has no intention of deliberately leading us astray. Based on how ChatGPT pulls together information and synthesizes it into a natural language response, it actually thought that “Gord Hotchkiss” did the things it told me I had done.

You could more accurately say ChatGPT is hallucinating – giving a false picture based on the information it retrieves and then tries to connect into a narrative. It’s a flaw that will undoubtedly get better with time.

The problem comes with how ChatGPT handles its dataset and determines relevance between items in that dataset. In this thorough examination by Machine Learning expert Devansh Devansh, ChatGPT is compared to predictive autocomplete on your phone. Sometimes, through a glitch in the AI, it can take a weird direction.

When this happens on your phone, it’s word by word, and you can easily spot where things are going off the rails. With ChatGPT, an initial error that might be small at first continues to propagate until the AI has spun complete bullshit and packaged it as truth. This is how it fabricated the Think Tank of Human Values in Business, a completely fictional organization, and inserted it into my CV in a very convincing way.
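To make that propagation concrete, here’s a deliberately toy sketch (the word associations are invented for illustration, and a real language model is vastly more sophisticated than a lookup table): each word is predicted only from what was generated so far, so one wrong association early on means every word after it is built on the error.

```python
# Toy illustration only (NOT ChatGPT's actual mechanics): a greedy
# next-word predictor where each prediction depends solely on the
# previously generated word.
next_word = {
    "gord": "hotchkiss",
    "hotchkiss": "founded",  # suppose this single association is wrong
    "founded": "the",
    "the": "think",
    "think": "tank",
}

def generate(seed, steps):
    """Greedily chain next-word predictions from a seed word."""
    words = [seed]
    for _ in range(steps):
        current = words[-1]
        if current not in next_word:
            break
        words.append(next_word[current])
    return " ".join(words)

print(generate("gord", 5))
# prints "gord hotchkiss founded the think tank" -- one bad link
# ("hotchkiss" -> "founded") and everything downstream follows from it
```

The point of the sketch is that there is no step where the chain checks itself against reality; a plausible-but-false early step simply becomes the context for everything that follows.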

There are many, many others who know much more about AI and Natural Language Processing than I do, so I’m going to recognize my limits and leave it there. Let’s just say that ChatGPT is prone to sharing its AI hallucinations in a very convincing way.

Users of ChatGPT Won’t Admit Its Limitations

I know and you know that marketers are salivating over the possibility of AI producing content at scale for automated marketing campaigns. There is a frenzy of positively giddy accounts about how ChatGPT will “revolutionize Content Creation and Analysis” – including this admittedly tongue-in-cheek one co-authored by MediaPost Editor in Chief Joe Mandese and – of course – ChatGPT.

So what happens when ChatGPT starts to hallucinate in the middle of a massive social media campaign that is totally on autopilot? Who will be the ghost in the machine that will say, “Whoa there, let’s just take a sec to make sure we’re not spinning out fictitious and potentially dangerous content?”

No one. Marketers are only human, and humans will always look for the path of least resistance. We work to eliminate friction, not add it. If we can automate marketing, we will. And we will shift the onus of verifying information to the consumer of that information.

Don’t tell me we won’t, because we have in the past and we will in the future.

We Believe What We’re Told

We might like to believe we’re Cartesian, but when it comes to consuming information, we’re actually Spinozan.

Let me explain. French philosopher René Descartes and Dutch philosopher Baruch Spinoza had two different views of how we determine if something is true.

Descartes believed that understanding and believing were two different processes. According to Descartes, when we get new information, we first analyze it and then decide if we believe it or not. This is the rational assessment that publishers and marketers always insist that we humans do and it’s their fallback position when they’re accused of spreading misinformation.

But Baruch Spinoza believed that understanding and belief happened at the same time. We start from a default position of believing information to be true without really analyzing it.

In 1993, Harvard Psychology Professor Daniel Gilbert decided to put the debate to the test (Gilbert, Tafarodi and Malone). He split a group of volunteers in half and gave both halves a text description detailing a real robbery. In the text there were true statements, in green, and false statements, in red. Some of the false statements made the crime appear to be more violent.

After reading the text, the study participants were supposed to decide on a fair sentence. But one of the groups got interrupted with distractions. The other group completed the exercise with no distractions. Gilbert and his researchers believed the distracted group would behave in a more typical way.

The distracted group gave out substantially harsher sentences than the other group. Because they were distracted, they forgot that green sentences were true and red ones were false. They believed everything they read (in fact, Gilbert’s paper was called “You Can’t Not Believe Everything You Read”).

Gilbert’s study showed that humans tend to believe first and that we actually have to “unbelieve” if something is eventually proven to us to be false. One study even found the place in our brain where this happens – the right inferior prefrontal cortex. This suggests that “unbelieving” makes the brain work harder than believing, which happens by default.

This brings up a three-pronged dilemma when we consider ChatGPT: it will tend to hallucinate (at least for now), users of ChatGPT will disregard that flaw when there are significant benefits to doing so, and consumers of ChatGPT generated content will believe those hallucinations without rational consideration.

When Gilbert wrote his paper, he was still three decades away from this dilemma, but he wrapped up with a prescient debate:

“The Spinozan hypothesis suggests that we are not by nature, but we can be by artifice, skeptical consumers of information. If we allow this conceptualization of belief to replace our Cartesian folk psychology, then how shall we use it to structure our own society? Shall we pander to our initial gullibility and accept the social costs of prior restraint, realizing that some good ideas will inevitably be suppressed by the arbiters of right thinking? Or shall we deregulate the marketplace of thought and accept the costs that may accrue when people are allowed to encounter bad ideas? The answer is not an easy one, but history suggests that unless we make this decision ourselves, someone will gladly make it for us.”

Daniel Gilbert

What Gilbert couldn’t know at the time was that “someone” might actually be a “something.”

(Image:  Etienne Girardet on Unsplash)

I Wish I Were Half the Person ChatGPT Thinks I Am

Yes, I’ve finally crawled aboard the journalist bandwagon about ChatGPT. The reason I’ve waited so long is that I don’t really have any valid technical or professional perspective to add to the conversation. All I have is my own personal opinion.

So here it is, for what it’s worth.

For my first test, I wanted to ask ChatGPT about something where there would be a fair amount of information online but also something where I had the advantage of knowing the inside scoop, allowing me to judge the accuracy of its response. So I asked about the Golden Triangle – the outcome of eye-tracking studies we did almost 20 years ago (I just had a minor internal meltdown when I did the mental math and realized that was two decades ago).

I have to admit, ChatGPT nailed it. It did an accurate and concise summary. I’m not sure I could have done better myself.

Google’s Golden Triangle, also known as the “Golden Triangle of Search,” refers to the top section of Google’s search engine results page (SERP) where the most relevant and highest-ranked search results are displayed. It is called the “golden triangle” because eye-tracking studies have shown that users tend to focus their attention on the top three results in the search results page, forming a triangular shape.

The concept of the Golden Triangle is important for search engine optimization (SEO) because it emphasizes the importance of ranking high in search results. Websites that appear in the Golden Triangle are more likely to receive clicks and traffic than those that appear lower on the page.

However, it’s worth noting that the layout of Google’s search engine results page has changed over the years, and the Golden Triangle may not be as relevant as it once was. Google now includes various types of results, such as featured snippets and knowledge panels, that may appear above or alongside the traditional organic search results. As a result, SEO strategies have evolved to include optimizing for these new types of results as well.

When I asked who discovered The Golden Triangle, ChatGPT accurately said it was a joint effort.

The concept of the Golden Triangle in relation to Google’s search engine results page (SERP) was not discovered by a single person, but rather through a combination of eye-tracking studies and data analysis conducted by various researchers and digital marketing professionals.

One of the earliest studies that examined how users interact with Google’s search results page was conducted by Enquiro Research in 2005. This study, titled “The Golden Triangle,” found that users tended to focus their attention on the top-left portion of the search results page, where the first few results were displayed.

Since then, other studies and analyses have confirmed the importance of the top section of the search results page, and the term “Golden Triangle” has become a widely used term in the SEO industry to describe this phenomenon.

For the record, the main players in identifying the pattern were Kevin Lee at DidIt, myself and my agency, Enquiro, EyeTools, the San Francisco eye-tracking consultants who conducted the first study and – in an independent study – Jakob Nielsen at the Nielsen Norman Group. It was Kevin and I who gave it the name The Golden Triangle, mere moments before we had to present the findings to a group at a search conference in San Jose.

So let’s try something a little less black and white. Let’s see how ChatGPT does when it has to freestyle a bit. I asked who I was.

The first paragraph was pretty accurate:

Gord Hotchkiss is a Canadian internet marketing and search engine optimization (SEO) expert. He is the CEO of a digital marketing agency called “Enquiro Search Solutions” which specializes in providing marketing and SEO services to businesses. Gord Hotchkiss is also a well-known speaker and author on the topics of online marketing and search engine optimization.

The second paragraph started a slight pumping of my ego, throwing in some accomplishments I either never did or else can’t remember doing:

Gord Hotchkiss has been involved in the online marketing industry for over 20 years and has worked with a wide range of clients, including Fortune 500 companies. He has written numerous articles on the topics of SEO, online marketing, and search engine technology for publications such as Search Engine Land, ClickZ, and MarketingProfs.

But in the third paragraph, ChatGPT definitely had me confused with someone else:

In addition to his work in the online marketing industry, Gord Hotchkiss is also a co-founder of the “Think Tank of Human Values in Business”, a non-profit organization that focuses on the intersection of business and ethics. He is known for his thought leadership in the industry and for advocating for ethical and sustainable business practices.

Okay, where the hell did that come from? It sounds extraordinarily cool and it’s something I would love to do, but it’s definitely not on my CV. As far as I’m aware, there is no such thing as the “Think Tank of Human Values in Business.” I wish there were, but there’s not.

I tried again. This time, ChatGPT seemed a little less sure of itself. The first sentence was somewhat correct, but after that ChatGPT seemed to manufacture my bio out of nothing but moonbeams and fairy dust. It said I was:

the CEO of a leading digital marketing agency, named “Digital Always Media Inc.” Gord is also a co-founder and chair of the “The Agents of Change” conference, an annual conference focused on helping businesses adapt to the ever-changing digital landscape.

Gord has written numerous articles and books on digital marketing and has been featured in several major media outlets, including The New York Times, The Wall Street Journal, and CNN.

I don’t know who this particular Gord Hotchkiss is, but I’d like to meet him. My AI doppelgänger is leaving me in the dust. I’d better get my shit together.

Or maybe ChatGPT can actually see into the future, showing me the person I just haven’t become yet.

What worries me is how easily ChatGPT constructed complete fabrications of who I was that sounded perfectly plausible. If I didn’t know myself better, I would have simply accepted it as fact. And that – unfortunately – is what we tend to do. We don’t analyze new information and determine if it’s right or wrong. Our default is to believe it’s true until proven otherwise.

It’s this human tendency that flags the real danger with ChatGPT. And, for that reason, I have some more digging to do.

Maybe this other Gord Hotchkiss guy can give me a hand. He sounds wicked smart.

(Image by Brian Penny — Pixabay license)

It Took a Decade, but Google Glass is Finally Broken

Did you hear that Google finally pulled the plug on Google Glass?

Probably not. The announcement definitely flew under the radar. It came with much less fanfare than the original roll out in 2013. The technology, which has been quietly on life support as an enterprise tool aimed at select industries, finally had its plug pulled with this simple statement on its support page:

Thank you for over a decade of innovation and partnership. As of March 15, 2023, we will no longer sell Glass Enterprise Edition. We will continue supporting Glass Enterprise Edition until September 15, 2023.

Talk about your ignoble demises. They’re offering a mere six months of support for those stubbornly hanging on to their Glass. Glass has been thrown in the ever-growing Google Graveyard, along with Google Health, Google+, Google Buzz, Google Wave, Knol – well, you get the idea.

It’s been 10 years, almost to the day, since Google invited 8,000 people to become “Glass Explorers” (others had a different name for them – “Glassholes”) and plunge into the world of augmented reality.

I was not a believer – for a few reasons I talked about way back then. That led me to say, “Google Glass isn’t an adoptable product as it sits.” It took 10 years, but I can finally say, “I told you so.”

I did say that wearable technology, in other forms, would be a game changer. I just didn’t think that Google Glass was the candidate to do that. To be honest, I hadn’t really thought much more about it until I saw the muted news that this particular Glass was a lot more than half empty. I think there are some takeaways about the fading dividing line between technology and humans that we should keep in mind.

First of all, I think we’ve learned a little more about how our brains work with “always on” technologies like Google Glass. The short answer is, they don’t – at least not very well. And this is doubly ironic because, according to an interview with Google Glass product director Steve Lee on The Verge back in 2013, that was the whole point:

“We all know that people love to be connected. Families message each other all the time, sports fanatics are checking live scores for their favorite teams. If you’re a frequent traveler you have to stay up to date on flight status or if your gate changes. Technology allows us to connect in that way. A big problem right now are the distractions that technology causes.”

The theory was that it was much less distracting to have information right in the line of sight, rather than having to go to a connected screen that might be in your pocket.

Lee went on. “We wondered, what if we brought technology closer to your senses? Would that allow you to more quickly get information and connect with other people but do so in a way — with a design — that gets out of your way when you’re not interacting with technology? That’s sort of what led us to Glass.” 

The problem here was one of incompatible operating systems – the one that drove Google Glass and the one we have baked into our brains. It turned out that maybe the technology was a little too close to our senses. A 2016 study (Lewis and Neider) found that trying to split attention between two different types of tasks – one scanning information on a heads up display and one trying to focus on the task at hand – ended up with the brain not being able to focus effectively on either. The researchers ended with this cautionary conclusion: “Our data strongly suggest that caution should be exercised when deploying HUD-based informational displays in circumstances where the primary user task is visual in nature. Just because we can, does not mean we should.”

For anyone who spends even a little time wondering how the brain works, this should not come as a surprise. There is an exhaustive list of research showing that the brain is not that great at multi-tasking. Putting a second cognitive task for the brain in our line of sight simply means the distraction is all that much harder to ignore.

Maybe there’s a lesson here for Google. I think sometimes they get a little starry-eyed about their own technological capabilities and forget to factor in the human element. I remember talking to a roomful of Google engineers more than a decade ago about search behaviors. I remember asking them if any of them had heard about Pirolli and Card’s pioneering work on their Information Foraging theory. Not one hand went up. I was gobsmacked. That should be essential reading for anyone working on a search interface. Yet, on that day, the crickets were chirping loudly at Mountain View.

If the Glass team had done their human homework, they would have found that the brain needs to focus on one task at a time. If you’re looking to augment reality with additional information, that information has to be synthesized into a single cohesive task for the brain. This means that for augmented reality to be successful, the use case has to be carefully studied to make sure the brain isn’t overloaded.

But I suspect there was another sticking point that prevented Google Glass from being widely adopted. It challenged the very nature of our relationship with technology. We like to believe we control technology, rather than the other way around. We have defined the online world as somewhere we “go” to through our connected devices. We are in control of when and where we do this. Pulling a device out and initiating an action keeps this metaphorical divide in place.

But Google Glass blurred this line in a way that made us uncomfortable. Again, a decade ago, I talked about the inevitable tipping point that will come with the merging of our physical and virtual worlds. Back then, I said, “as our technology becomes more intimate, whether it’s Google Glass, wearable devices or implanted chips, being ‘online’ will cease to be about ‘going’ and will become more about ‘being.’  As our interface with the virtual world becomes less deliberate, the paradigm becomes less about navigating a space that’s under our control and more about being an activated node in a vast network.”

I’m just speculating, but maybe Google Glass was just a step too far in this direction – for now, anyway.

(Feature image: Tim.Reckmann, CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0, via Wikimedia Commons)

The Pursuit of Happiness

Last week, I talked about physical places where you can find happiness – places like Fremont, California, the happiest city in the US, or Finland, the happiest country in the world.

But, of course, happiness isn’t a place. It’s a state of mind. You don’t find happiness. You experience happiness. And the nature of that experience is a tough thing to nail down.

That could be why the World Happiness Report was called “complete crap” by opinion columnist Kyle Smith back in 2017:

“These surveys depend on subjective self-reporting, not to mention eliding cultural differences. In Japan there is a cultural bias against boasting of one’s good fortune, and in East Asia the most common response, by far, is to report one’s happiness as average. In Scandinavia, meanwhile, there is immense societal pressure to tell everyone how happy you are, right up to the moment when you’re sticking your head in the oven.”

Kyle Smith, 2017

And that’s the problem with happiness. It’s kind of like quantum mechanics – the minute you try to measure it, it changes.

Do you ever remember your grandparents trying to measure their happiness? It wasn’t a thing they thought about. Sometimes they were happy, sometimes they weren’t. But they didn’t dwell on it. They had other, more pressing, matters to think about. And if you asked them to self-report their state of happiness, they’d look at you like you had just given birth to a three-horned billy goat.

Maybe we think too much about happiness. Maybe we’re setting our expectations too high. A 2011 study (Mauss, Tamir, Anderson & Savino) found that the pursuit of happiness may lead to the opposite outcome, never being happy. “People who highly value happiness set happiness standards that are difficult to obtain, leading them to feel disappointed about how they feel, paradoxically decreasing their happiness the more they want it.”

This is a real problem, especially in today’s media environment. Never in our lives have we been more obsessed with the pursuit of happiness. The problem comes with how we define that happiness. If you look at how media portrays happiness, it’s a pretty self-centred concept. It’s really all about us: what we have, where we are, how we’re feeling, what we’re doing. And all that is measured against what should make us happier.

That’s where the problem of measurement raises its prickly little head. In 1971, social scientists Philip Brickman and Donald T. Campbell came up with something called the “happiness set point.” They wanted to see if major life events – both negative and positive – actually changed how happy people were. The initial study and follow-ups that further explored the question found that after an initial shift in happiness following major events such as lottery wins, big promotions or life-altering accidents, people gradually returned to a happiness baseline.

But more recent academic work has found that it’s not quite so simple. First of all, there’s no such thing as a universal happiness “set point.” We all have different baselines of how happy we are. Also, some of us are more apt to respond, either positively or negatively, to major life events.

There are life events that can remove the foundations of happiness – for example, losing your job, causing a significant downturn in your economic status. As I mentioned before, money may not buy happiness, but economic stability is correlated with happiness.

What can make a difference in happiness is what we spend time doing. And in this case, life events can set up the foundations of changes that can lead to either more happiness or less. Generally, anything that leads to more interaction with others makes us happier. Anything that leads to social withdrawal tends to make us less happy.

So maybe happiness isn’t so much about how we feel, but rather a product of what we do.

Continuing on this theme, I found a couple of interesting data visualizations by statistician Nathan Yau. The most recent one examined the things that people did at work that made them happy.

If you’re in the legal profession, I have bad news. That ranked highest for stress and low for happiness and meaningfulness. On the other end of the spectrum, hairdressers and manicurists scored high for happiness and low on stress. Construction jobs also seemed to tick the right boxes when it comes to happiness on the job.

For me, the more interesting analysis was one Yau did back in 2018. He looked at a dataset that came from asking 10,000 people what had made them happy in the past 24 hours. Then he parsed the language of those responses to look for the patterns that emerged. The two biggest categories that led to happiness were “Achievement” and “Affection.”

From this, we start to see some common underpinnings for happiness: doing things for others, achieving the things that are important to us, spending time with our favorite people, bonding over shared experiences.

So let’s get back to the “pursuit of happiness” – something so important to Americans that they enshrined it in the Declaration of Independence. But, according to Stanford historian Caroline Winterer, in her 2017 TED talk, that definition of happiness is significantly different from what we currently think of. In her words, that happiness meant, “Every citizen thinking of the larger good, thinking of society, and thinking about the structures of government that would create a society that was peaceful and that would allow as many people as possible to flourish.”

When I think of happiness, that makes more sense. It also matches the other research I shared here. We seem happiest when we’re not focused on ourselves but are instead thinking about others. The opposite holds when our happiness navel-gazing is measuring how we come up short against the unrealistic expectations set by social media.

Like too many things in our society, happiness has morphed from something good and noble into a selfish sense of entitlement.

(Image credit – Creative Commons License – https://www.flickr.com/photos/stevenanichols/2722210623)

Little White Paper Lies

When I was writing last week’s post about poor customer service, I remembered a study I wrote about back in 2019. The study was about how so many companies were terrible at responding to customer service emails. It was released by the Norwegian CRM provider SuperOffice.

At the time, the study was mentioned in a number of articles. The findings were compelling:

Sixty-two percent of companies didn’t respond to customer service emails. Ninety percent of companies didn’t let the customer know their email had been received. Given the topic of my post, this was exactly the type of empirical evidence I was looking for.

There was just one problem. The original study was done in 2018. I wondered if the study had been updated. After a quick search, I thought I had hit pay dirt. Based on the landing page (which came at the top of the results page for “customer service benchmark report”), a new 2023 study was available.

Perfect, I thought.  I filled in the lead contact form, knowing I was tossing my name into a lead-generation mill. I figured, “What the hell. I’m willing to trade that for some legit research.” I eagerly downloaded the report.

It was the same one I had seen four years earlier. Nothing was new.

Puzzled, I carefully went over the landing page wording. Sure enough, it said a new report had just been released. It gave some tidbits of the new findings, all of which were exactly the same as the 2018 report. After each “finding,” I was told “Tweet this!”

I was starting to get the whiff of something rotten from the State of Norway.

I tracked down the post author through LinkedIn. He was an SEO contractor based in Estonia. He replied saying he thought the company was still working on the new report.

I then reached out to the company. I not only wanted to see what they said about the report, I also wanted to see if they responded to my email. Did they walk their own talk?

To their credit, they did respond, with this, “We are sorry that the report have [sic] not been updated, and right now we have no plans to do that.”

So, the landing page was a bald-faced lie? I mentioned this in an email back to them. They apologized and said they would update the landing page to be more accurate. Based on the current version, it was nudged in this direction, but it is still exceedingly misleading.

This is just one example of how corporate white papers are churned out to grab some attention, get some organic search rankings and collect leads. I fell for it, and I should have known better. I had already seen this sausage factory from the inside out.

Back in the days when we used to do usability research, we had been asked by more than one company to do a commissioned study. These discussions generally started with these words: “Here is what we’d like the research to say.”

I’m guessing things haven’t changed much since then. Most of the corporate research I quote in this column is commissioned by companies who are selling solutions to the problems the research highlights.

For any of you in the research biz, you know ethically what a slippery slope it can be. Even in the supposedly pristine world of academic research, you don’t have to turn over too many rocks to uncover massive fraud, as documented in this Nature post. Imagine, then, the world of corporate commissioned whitepaper research, where there is no such thing as peer review or academic rigor. It’s the gloves off, no-holds-barred, grimy underbelly of research.

With our research, I tried to always make sure the research itself was done well. When we did do commissioned research, we tried to make the people who paid the bills happy by the approach we took to interpreting the research. That’s probably why we didn’t get a lot of commissions. Most of the research we did was for our own purposes, and we did our best to keep it legit. If we did get sponsors, they went in with the understanding that we were going to let the results frame the narrative, rather than the other way around.  I wanted to produce research that people could trust.

That was the biggest letdown of the SuperOffice experience. When I saw how cavalier the company was with how they presented the research on their landing page, I realized that not only could I not trust their promotion of the research, I had trouble trusting the original research itself. I suspected I may have been duped into passing questionable information along the first time.

Fool me once…

Why Infuriating Your Customers May Not Be a Great Business Strategy

“Online, brand value is built through experience, not exposure”

First, a confession. I didn’t say this. I wish I’d said it, but it was actually said by usability legend Jakob Nielsen at a workshop he did way back in 2006. I was in the audience, and I was listening.  Intently.

But now, some 17 years later, I have to wonder if anyone else was. According to a new study from Yext that Mediapost’s Laurie Sullivan looked at, many companies are still struggling with the concept. Here are just a few tidbits from her report:

“47% (of leads) in a Yext survey saying they were unable to make an online purchase because the website’s help section did not provide the information needed.”

“On average respondents said it takes nearly 9 hours for a typical customer service issue to be resolved. Respondents said resolution should take about 14.5 minutes.”

“42% of respondents say that help sites do not often provide the answers they look for with a first search.”

“The biggest challenge, cited by 61%, is that the help site does not understand their question.”

This isn’t rocket science, people. If you piss your customers and prospects off, they will go find one of your competitors that doesn’t piss them off. And they won’t come back.

Perhaps the issue is that businesses doing business online have a bad case of the Lake Wobegon effect. This, according to Wikipedia, is “a natural human tendency to overestimate one’s capabilities.” It came from Garrison Keillor’s description of his fictional town in Minnesota where “all the women are strong, all the men are good-looking, and all the children are above average.”

When applied to businesses, it means that they think they’re much better at customer service than they actually are. In a 2005 study titled “Closing the delivery gap,” global consulting firm Bain & Company found that 80% of companies believe they are delivering superior service. And yet only 8% of customers believe they are receiving excellent service.

I couldn’t find an update to this study, but I suspect it’s probably still true. It’s also true that when it comes to judging the quality of your customer service, your customer is the only one who can do it. So you should listen to them.

If you don’t listen, the price you’re paying is huge. According to yet another study, call centre platform provider TCN’s second annual “Consumer Insights about Customer Service,” 66% of Americans are likely to abandon a brand after a poor customer service experience.

Yet, for many companies, customer service is at the top of their cost-cutting hit list. According to the Bureau of Labor Statistics, the projected average growth rate for all occupations from 2020 to 2030 is 8%, but for customer service specifically, the estimated growth is actually -4%. In many cases, this reduced head count is due to companies either outsourcing their customer service or swapping people for technology.

This is probably not a great move.

Again, according to the TCN study, when asked for their preferred method of communicating with a company’s customer service department, number one was “talking to a live agent by phone,” chosen by 49%. Just behind, at 45%, was an “online chat with a live agent.”

Now, granted, this is coming from a company that just happens to provide these solutions, so take it with a grain of salt, but still, this is probably not the place you should be reducing your head count.

One final example of the importance of customer service, not from a study but from my own circle of influencers. My wife and I recently booked a trip with my daughter and her husband and, like everyone else in the last few years, we found we had to cancel it. The trip was booked through Expedia, so the credits, while issued by the carrier, had to be rebooked through Expedia.

My daughter tried to rebook online and soon found that she had to talk to an Expedia Customer Service Agent. We happened to be with her when she did this. It turned out she talked to not one, but three different agents. The first flatly refused to rebook and seemed to have no idea how the system worked. The second was slightly more helpful but suggested a way to rebook that my daughter wasn’t comfortable with. The third finally got the job done. This took about 3 hours on the phone, all to do something that should have taken 2 minutes online.

I haven’t mustered up the courage to attempt to rebook my credits yet. One thing I do know – it will involve whiskey.

What are the chances that we will book another flight on Expedia? About the same as me making the 2024 Chinese Olympic gymnastics team.

Actually, that might have the edge.

Older? Sure. Wiser? Debatable.

I’ve always appreciated Mediapost Editor-in-Chief Joe Mandese’s take on things. It’s usually snarky, cynical and sarcastic, all things which are firmly in my wheelhouse. He also says things I may think but wouldn’t say for the sake of political politeness.

So when Joe gets a full head of steam up, as he did in a recent post entitled “Peak Idiocracy?”, I set aside some time to read it. I can vicariously fling aside my Canadian reticence and enjoy a generous helping of Mandesian snarkiness. In this case, the post was a recap of Mediapost’s 2023 Marketing Politics Conference – and the depths that political advertising is sinking to in order to appeal to younger demographics. Without stealing Joe’s thunder (please read the post if you haven’t), one example involved TikTok and mouth mash-up filters. After the panel where this case study surfaced, Joe posed a question to the panelists.

“If this is how we are electing our representative leaders, do you feel like we’ve reached peak idiocracy in the sense that we are using mouth filters and Harry Potter memes to get their messages across?”

As Joe said, it was an “old guy question.” More than that, it was a cynical, smart, sarcastic old guy question. But the fact remains, it was an old guy question. One of the panelists, DGA Digital Director Laura Carlson responded:

“I don’t think we should discount young voters’ intelligence. I think being able to have fun with the news and have fun with politics and enjoy TikTok and enjoy the platform while also engaging with issues you care about is something I wouldn’t look down on. And I think more of it is better.”

There’s something to this. Maybe a lot to this.

First, I think we have fundamentally different ideas of “messaging” from generation to generation. Our generation (technically I’m a Boomer, but the label Generation Jones is a better fit) grew up with the idea that information, whether on TV or radio or in a newspaper or magazine, was delivered as a complete package. There was a scarcity of information, and this bundling of curated information was our only choice for being informed.

That’s not the case for a generation raised with the Internet and social media. Becoming aware and being informed are often decoupled. In an environment jammed with information of all types – good and bad – information foraging strategies have had to evolve. Now, you have to somehow pierce the information filters we have all put in place in order to spark awareness. If you are successful in doing that and can generate some curiosity, you have umpteen million sources just a few keystrokes away where you can become informed.

Still, we “old guys” (and “old gals” – for the sake of consistency, I’ll use the masculine label, but I mean it in the gender-neutral way) do have a valid perspective that shouldn’t be dismissed as us just being old and grumpy. We’ve been around long enough to see how actions and consequences are correlated. We’ve seen how seemingly trivial trends can have lasting impacts, both good and bad. There is experience here that can prove instructive.

But we also must appreciate that those a few generations behind us have built their own cognitive strategies to deal with information that are probably a better match for the media environment we live in today.

So let me pose a different question. If only one generation could vote, and if everyone’s future depended on that vote, which generation would you choose to give the ballots to? Pew Research did a generational breakdown on awareness of social issues and for me, the answer is clear. I would far rather put my future in the hands of Gen Z and Millennials than in the hands of my own generation. They are more socially aware, more compassionate, more committed to solving our many existential problems and more willing to hold our governments accountable.

So, yes, political advertising might be dumbed down to TikTok level for these younger voters, but they understand how the social media game is played. I think they are savvy enough to know that a TikTok mash-up is not something to build a political ideology on. They accept it for what it is: a brazen attempt to scream just a little louder than the competition for their attention, standing out from the cacophony of media intrusiveness that engulfs them. If it has to be silly to do that, so be it.

Sure, the generation of Joe Mandese and myself grew up with “real” journalism: the nightly news with Dan Rather and Tom Brokaw, 60 Minutes, The MacNeil/Lehrer Report, the New York Times, The Washington Post. We were weaned on political debates that dealt with real issues.

And for all that, our generation still put Trump in the White House. So much for the wisdom of “old guys.”

The Eternal Hatred of Interruptive Messages

Spamming and Phishing and Robocalls at Midnight
Pop ups and Autoplays and LinkedIn Requests from Salespeople

These are a few of my least favorite things

We all feel the excruciating pain of unsolicited demands on our attention. In a study by online security firm Kaspersky of 2,000 Brits on the 50 most annoying things in life, deleting spam email came in at number 4, behind scrubbing the bath, being trapped in voicemail hell and cleaning the oven.

Based on this study, cleanliness is actually next to spamminess.

Granted, Kaspersky is a tech security firm, so the results are probably biased toward the digital side, but for me the results check out. As I ran down the list, I hated all the same things that were listed.

In the same study, robocalls came in at number 10. Personally, they top my list, especially phishing robocalls. I hate – hate – hate rushing to my phone only to hear that the IRS is going to prosecute me unless I immediately push 7 on my touch-tone keypad.

One, I’m Canadian. Two, go to Hell.

I spend more and more of my life trying to avoid marketers and scammers (the line between the two is often fuzzy) trying desperately to get my attention by any means possible. And it’s only going to get worse. A study just out showed that the ChatGPT AI chatbot could be a game changer for phishing, making scam emails harder to detect. And with Google’s Gmail filters already trapping 100 million phishing emails a day, that is not good news.

The marketers in my audience are probably outrunning Usain Bolt in their dash to distance themselves from spammers, but all interruptive demands on our attention sit on a spectrum that shares the same baseline: any demand on our attention that we didn’t ask for will annoy us. The only difference is the degree of annoyance.

Let’s look at the psychological mechanisms behind that annoyance.

There is a direct link between the parts of our brain that govern the focusing of attention and the parts that regulate our emotions. At its best, it’s called “flow” – a term coined by Mihaly Csikszentmihalyi that describes a sense of full engagement and purpose. At its worst, it’s a feeling of anger and anxiety when we’re unwillingly dragged away from the task at hand.

In a 2017 neurological study, Rejer and Jankowski found that when a participant’s cognitive processing of a task was interrupted by online ads, activity in the frontal and prefrontal cortex simply shut down while other parts of the brain significantly shifted activity, indicating a loss of focus and a downward slide in emotions.

Another study, by Edwards, Li and Lee, points the finger at something called Reactance Theory as a possible explanation. Very simply put, when something interrupts us, we perceive a loss of freedom to act as we wish and a loss of control of our environment. Again, we respond by getting angry.

It’s important to note that this negative emotional burden applies to any interruption that derails what we intend to do. It is not specific to advertising, but a lot of advertising falls into that category. It’s the nature of the interruption and our mental engagement with the task that determine the degree of negative emotion.

Take skimming through a news website, for instance. We are there to forage for information. We are not actively engaged in any specific task. And so being interrupted by an ad while in this frame of mind is minimally irritating.

But let’s imagine that a headline catches our attention, and we click to find out more. Suddenly, we’re interrupted by a pop-up or pre-roll video ad that hijacks our attention, forcing us to pause our intention and focus on irrelevant information. Our level of annoyance begins to rise quickly.

Robocalls fall into a different category of annoyance for many reasons. First, we have a conditioned response to phone calls where we hope to be rewarded by hearing from someone we know and care about. That’s what makes it so difficult to ignore a ringing phone.

Secondly, phone calls are extremely interruptive. We must literally drop whatever we’re doing to pick up a phone. When we go to all this effort only to realize we’ve been duped by an unsolicited and irrelevant call, the “red mist” starts to float over us.

You’ll note that – up to this point – I haven’t even dealt with the nature of the message. This has all been focused on the delivery of the message, which immediately puts us in a more negative mood. It doesn’t matter whether the message is about a service special for our vehicle, an opportunity to buy term life insurance or an attempt by a fictitious Nigerian prince to lighten the load of our bank account by several thousand dollars; whatever the message, we start in an irritated state simply due to the nature of the interruption.

Of course, the more nefarious the message that’s delivered, the more negative our emotional response will be. And this has a doubling-down effect on any form of intrusive advertising: we learn to associate the delivery mechanism itself with attempts to defraud us. Any politician who depends on robocalls to raise awareness on the day before an election should ponder their ad-delivery mechanism.

Good News and Bad News about Black Swans

First, the good news. According to a new study, we may be able to predict extreme catastrophic events, such as earthquakes, tsunamis, massive wildfires and pandemics, through machine learning and neural networks.

The problem with these “black swan” events (events that are very rare but have extreme consequences) is that there isn’t much existing data we can use to predict them. The technical term is a “stochastic” event: they are random and, by definition, very difficult to forecast.

Until now. According to the study’s lead author, George Karniadakis, the researchers may have found a way to give us a heads up by using machine learning to make the most out of the meagre data we do have. “The thrust is not to take every possible data and put it into the system, but to proactively look for events that will signify the rare events,” Karniadakis says. “We may not have many examples of the real event, but we may have those precursors. Through mathematics, we identify them, which together with real events will help us to train this data-hungry operator.”

This means that this science could potentially save thousands – or millions – of lives.

But – and now comes the bad news – we have to listen to it. And we have a horrible track record of doing that. Let’s take just one black swan – COVID-19. Remember that?

Justsecurity.org is an “online forum for the rigorous analysis of security, democracy, foreign policy, and rights.” In other words, it’s their job to minimize the impact of black swans. And they put together a timeline of the U.S. response to the COVID-19 pandemic. Now that we know the consequences, it’s a terrifying and maddening read. Without getting into the details, it was months before the U.S. federal government took substantive action against the pandemic, despite repeated alerts from healthcare officials and scientists. This put the U.S. behind pretty much the entire developed world in terms of minimizing the impact of the pandemic and saving lives. All the bells, whistles and sirens were screaming at full volume, but no one wanted to listen.

Why? Because there has been a systemic breakdown in what we call epistemic trust – trust in new information coming to us from what should be a trustworthy and relevant source.

I’ll look at this breakdown on two fronts – trust in government and trust in science. These two things should work together, but all too often they don’t. That was especially true in the Trump administration’s handling of the COVID-19 pandemic.

Let’s start with trust in government. Based on a recent study across 22 countries by the OECD, on average only about half the citizens trust their government. Trust is highest in countries like Finland, Norway and Luxembourg (where only 20 to 30% of the citizens don’t trust their government) and lowest in countries such as Colombia, Latvia and Austria (where over 60% of citizens have no trust in their government).

You might notice I didn’t mention the U.S. That’s because it wasn’t included in the study. But the Pew Research Center has been tracking trust in government since 1958, so let’s look at that.

The erosion of trust in the US federal government started with Lyndon Johnson, and trust plummeted with Nixon and Watergate. Interestingly, although separated by ideology, Republicans and Democrats track similarly in their erosion of trust from Nixon through George W. Bush, with the exception of the Reagan years. That pattern began to break down with Obama and polarized even further under Trump and Biden, with the two parties’ trends heading in opposite directions. The overall trend, though, is still towards lower trust.

Now, let’s look at trust in science. While not as drastic as the decline of trust in government, Pew found that trust in science has also declined, especially in the last few years. The percentage of Americans who had no trust in science almost doubled, from 12% in April 2020 to 22% in December 2021.

It’s not that the science got worse in those 20 months. It’s that we didn’t want to hear what the science was telling us. The thing about epistemic trust – our willingness to trust trustworthy information – is that it varies depending on what mood we’re in. The higher our stress level, the less likely we are to accept good information at face value, especially if what it’s trying to tell us will only increase our level of stress.

Inputting new information that disrupts our system of beliefs is hard work under any circumstances. It taxes the brain. And if our brain is already overtaxed, it protects itself by locking the doors and windows that new information might sneak through and doubling down on our existing beliefs. This is what psychologists call confirmation bias: we only accept new information if it matches what we already believe. This is doubly true if the new information is not something we really want to hear.

The only thing that may cause us to question our beliefs is a niggling doubt, caused by information that doesn’t fit with our beliefs. But we will go out of our way to find information that does conform to our beliefs so we can ignore the information that doesn’t fit, no matter how trustworthy its source.  The explosion of misinformation that has happened on the internet and through social media has made it easier than ever to stick with our beliefs and willfully ignore information that threatens those beliefs.

The other issue in the systemic breakdown of trust may not always be the message – it might be the messenger. If science is trying to warn us about a threatening black swan, that warning is generally going to be delivered in one of two ways: either through a government official or through the media. And that’s probably where we have our biggest problem. Again, referring to research done by Pew, Americans distrust journalists almost as much as government. Sixty percent of American adults had little to no trust in journalists, and a whopping 76% had little to no trust in elected officials.

To go back to my opening line, the good news is science can warn us about Black Swan events and save lives. The bad news is, we have to pay attention to those warnings.

Otherwise, it’s just a boy calling “wolf.”

Harry, Meghan and the Curse of Celebrity

The new Netflix series on Harry and Meghan is not exactly playing out according to plan. A few weeks ago, MediaPost TV columnist Adam Buckman talked about the series, which promised an unprecedentedly intimate view into the lives of the wayward royal and his partner, its aim being “– to give the rest of us a full-access pass into every nook and cranny of the lives and minds of Harry and Meghan.”

Since then, reviews have been mixed. While it is (according to Netflix) their most watched documentary ever, the world seems to be responding with a collective yawn. It is certainly not turning out to be the PR boost the two were hoping for, at least based on some viewer reviews on Rotten Tomatoes. Here is just one sample: “A massive whinge fest based on a string of lies, half-truths, and distortions of reality from two of the most privileged people on the planet.”

What I found interesting in this is the complex concept of celebrity, and how it continues to evolve – or more accurately, devolve – in our culture. This is particularly true when we mix our attitudes of modern celebrity with the hoary construct of royalty.

If it does anything, I think Harry and Meghan shows how the very concept of celebrity has turned toxic and has poisoned whatever nominal value you may find in sustaining a monarchy. And, if we are going to dissect the creeping disease of celebrity, we must go to the root of the problem, the media, because our current concept of celebrity didn’t really exist before modern mass media.

We have evolved to keep an eye on those that are at the top of the societal pyramid. It was a good survival tactic to do so. Our apex figureheads – whether they be heroes or gods – served as role models; a literal case of monkey see, monkey do. But it also ensured political survival. There is a bucketload of psychology tucked up in our brains reinforcing this human trait.

In many mythologies, the line between heroes and gods was pretty fuzzy. Also, interestingly, gods were always carnal creatures. The Greek and Roman mythical gods and heroes ostensibly acted as both role models and moral cautionary tales. With great power came great hedonistic appetites.

This gradually evolved into royalty. With kings and queens, there was a very deliberate physical and societal distance kept between royalty and the average subject.  The messy bits of bad behavior that inevitably come with extreme privilege were always kept well hidden from the average subject.  It pretty much played out that way for thousands of years.

There was a yin and yang duality to this type of celebrity that evolved over time. If we trace the roots of the word notorious, we see the beginnings of this duality and get some hints of when it began to unravel.

Notorious comes from the Latin notus, meaning “known.” Its current meaning, being known for something negative, only dates from the 17th century. It seems we could accept the duality of notoriety when it came to the original celebrities – our heroes and gods – but with the rise of Christianity and, later, Puritanism (which also hit its peak in the 17th century), we started a whitewash campaign on our own God’s image. This had a trickle-down effect in a more strait-laced society. We held our heroes, our God, and our kings and queens to a higher standard. We didn’t want to think of them as carnal creatures.

Then, thanks to the media, things got a lot more complicated.

Up until the 19th century, there was really no such thing as a celebrity as we know them today. Those who care about such things generally agree that French actress Sarah Bernhardt was the first modern celebrity. She became so because she knew how to manipulate media. She was the first to get her picture in the press. She was able to tour the world, with the telegraph spreading the word before her arrival. As the 19th century drew to a close, our modern concept of celebrity was being born.

It took a while for this fascination with celebrity to spill over to monarchies. In the case of the House of Windsor (a made-up name; the actual family name was Saxe-Coburg-Gotha, a decidedly Germanic name that became problematic when England was at war with Germany in World War I), the problem came to a head rather abruptly with King Edward VIII. He was the first royal to revel in celebrity and to try to use the media to his advantage. The worlds of celebrity and royalty collided with his abdication in 1936.

In watching Harry and Meghan, I couldn’t help but recount the many, many collisions between celebrity and the Crown since then. The monarchy has always tried to control its image through the media, and one can’t help feeling it has been hopelessly naïve in that attempt. Celebrity feeds on itself – it is the nature of the beast – and control is not an option.

Celebrity gives us the illusion of intimacy. We mistakenly believe we know the famous person the same way we know those closest to us in our own social circle. We feel we have the right to judge them based on the distorted image of them that comes through the media. Somehow, we believe we know what motivates Harry and Meghan, what their ethics entail, what kind of people they are.

I suppose one can’t fault Harry and Meghan for trying – yet again – to add their own narrative to the whirling pool of celebrity that surrounds them. But, if history is any indicator, it’s not really a surprise that it’s not going according to their plan.