Why I’m Worried About AI

Even in my world, which is nowhere near the epicenter of the technology universe, everyone is talking about AI. And depending on who’s talking, it’s either going to be the biggest boon to humanity, or it’s going to wipe us out completely. Middle ground seems to be hard to find.

I recently attended a debate at the local university about it. Two were arguing for AI, and two were arguing against. I went into the debate somewhat worried. When I walked out at the end of the evening, my worry was bubbling just under the panic level.

The “For” Team had a computer science professor, Kevin Leyton-Brown, and a philosophy professor, Madeleine Ransom. Their arguments seemed to rely mainly on AI creating more leisure time for us by freeing us from the icky jobs we’d rather not do. Leyton-Brown did make a passing reference to AI helping us solve the many, many wicked problems we face, but he never got into specifics.

“Relax!” seemed to be the message. “This will be great! Trust us!”

Half of the “Against” Team was Bryce Traister, a professor in Creative and Critical Studies. As far as I could see, he seemed mainly worried about AI replacing Shakespeare. He also seemed quite enamored with the cleverness of his own quips.

The other “Against” debater, Wendy Wong, a professor of Political Science, was the only one to talk about something concrete I could wrap my head around. She has a book on data and human rights coming out this fall, and many of her concerns focused on that area.

Interestingly, all the debaters mentioned social media in their arguments. And on this point, they were united: everyone agreed that the impact of social media has been horrible. But the boosters were quick to say that AI is nothing like social media.

Except that it is. Maybe not in terms of the technology that lies beneath it, but in terms of the unintended consequences it could unleash, absolutely! Like social media, what will get us with AI are the things we don’t know we don’t know.

I remember when social media first appeared on the scene. As with AI, there were plenty of evangelists lining up to say the technology would connect us in ways we couldn’t have imagined. We were redefining community, removing the physical constraints that had previously limited connections.

If there was a difference between social media and AI, it’s that I don’t remember the same doomsayers at the advent of social media. Everyone seemed to be saying, “This will be great! Trust us!”

Today, of course, we know better. No one was warning us that social media would divide us in ways we never imagined, driving a wedge down the ideological middle of our society. There were no hints that social media could (and still might) short circuit democracy.

Maybe that’s why we’re a little warier when it comes to AI. We’ve already been fooled once.

I find that AI boosters share a similar mindset – they tend to be from the S.T.E.M. (Science, Technology, Engineering and Math) School of Thought. As I’ve said before, these types of thinkers tend to mistake complex problems for complicated ones. They think everything is solvable if you just have a powerful enough tool and apply enough brain power. For them, AI is the Holy Grail – a powerful tool that potentially applies unlimited brain power.

But the dangers of AI are hidden in the roots of complexity, not complication, and that requires a different way of thinking. If we’re going to get some glimpse of what’s coming our way, I am more inclined to trust the instincts of those who think in terms of the humanities – a thinker, for example, such as Yuval Noah Harari, author of Sapiens.

Harari recently wrote an essay in the Economist that may be the single most insightful thing I’ve read about the dangers of AI: “AI has gained some remarkable abilities to manipulate and generate language, whether with words, sounds or images. AI has thereby hacked the operating system of our civilisation.”

In my previous experiments with ChatGPT, it was this fear that was haunting me. Human brains operate on narratives. We are hard-wired to believe them. By using language, AI has a back door into our brains that bypasses all our protective firewalls.

My other great fear is that the development of AI is being driven by for-profit corporations, many of which rely on advertising as their main source of revenue. If ever there was a case of putting the fox in charge of the henhouse, this is it!

When it comes to AI, it’s not my job I’m afraid of losing. It’s my ability to sniff out AI-generated bullshit. That’s what’s keeping me up at night.

The Dangerous Bits about ChatGPT

Last week, I shared how ChatGPT got a few things wrong when I asked it “who Gord Hotchkiss was.” I did this with my tongue at least partially implanted in cheek – but the response did show me a real potential danger here, coming from how we will interact with ChatGPT.

When things go wrong, we love to assign blame. And if ChatGPT gets things wrong, we will be quick to point the finger at it. But let’s remember, ChatGPT is a tool, and the fault very seldom lies with the tool. The fault usually lies with the person using the tool.

First of all, let’s look at why ChatGPT put together a bio of me that was somewhat less than accurate (although it was very flattering to yours truly).

When AI Hallucinates

I have found a few articles that call ChatGPT out for lying. But lying is an intentional act, and – as far as I know – ChatGPT has no intention of deliberately leading us astray. Based on how ChatGPT pulls together information and synthesizes it into a natural language response, it actually “thought” that “Gord Hotchkiss” did the things it told me I had done.

You could more accurately say ChatGPT is hallucinating – giving a false picture based on the information it retrieves and then tries to connect into a narrative. It’s a flaw that will undoubtedly get better with time.

The problem comes with how ChatGPT handles its dataset and determines relevance between items in that dataset. In this thorough examination by machine learning expert Devansh, ChatGPT is compared to the predictive autocomplete on your phone. Sometimes, through a glitch in the AI, it can take a weird direction.

When this happens on your phone, it’s word by word, and you can easily spot where things are going off the rails. With ChatGPT, an error that might be small at first continues to propagate until the AI has spun complete bullshit and packaged it as truth. This is how it fabricated the Think Tank of Human Values in Business, a completely fictional organization, and inserted it into my CV in a very convincing way.
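
To make that mechanism a little more concrete, here’s a toy sketch of autoregressive generation – my own illustration with an invented word table, not how ChatGPT is actually implemented – showing how one bad early choice conditions everything that follows:

```python
# A toy "next word" table standing in for a language model.
# The entries are invented; note that nothing here checks facts --
# each word is chosen only because it plausibly follows the last one.
next_word = {
    "gord": "hotchkiss",
    "hotchkiss": "founded",   # one plausible-sounding wrong turn...
    "founded": "the",
    "the": "think",
    "think": "tank",
}

def generate(seed, max_words=6):
    """Greedy autoregressive generation: each new word depends only on
    the words already produced, so an early error propagates forward."""
    words = [seed]
    for _ in range(max_words):
        nxt = next_word.get(words[-1])
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

# ...and the model confidently completes the fiction:
print(generate("gord"))  # -> "gord hotchkiss founded the think tank"
```

The point of the toy: the table never consults reality, only what came before, which is why a small early error can snowball into a confident fabrication.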

There are many, many others who know much more about AI and natural language processing than I do, so I’m going to recognize my limits and leave it there. Let’s just say that ChatGPT is prone to sharing its AI hallucinations in a very convincing way.

Users of ChatGPT Won’t Admit Its Limitations

I know and you know that marketers are salivating over the possibility of AI producing content at scale for automated marketing campaigns. There is a frenzy of positively giddy accounts about how ChatGPT will “revolutionize Content Creation and Analysis” – including this admittedly tongue-in-cheek one co-authored by MediaPost Editor in Chief Joe Mandese and – of course – ChatGPT.

So what happens when ChatGPT starts to hallucinate in the middle of a massive social media campaign that is totally on autopilot? Who will be the ghost in the machine that says, “Whoa there, let’s just take a sec to make sure we’re not spinning out fictitious and potentially dangerous content”?

No one. Marketers are only human, and humans will always look for the path of least resistance. We work to eliminate friction, not add it. If we can automate marketing, we will. And we will shift the onus of verifying information to the consumer of that information.

Don’t tell me we won’t, because we have in the past and we will in the future.

We Believe What We’re Told

We might like to believe we’re Cartesian, but when it comes to consuming information, we’re actually Spinozan.

Let me explain. French philosopher René Descartes and Dutch philosopher Baruch Spinoza had two different views of how we determine if something is true.

Descartes believed that understanding and believing were two different processes. According to Descartes, when we get new information, we first analyze it and then decide if we believe it or not. This is the rational assessment that publishers and marketers always insist we humans do, and it’s their fallback position when they’re accused of spreading misinformation.

But Baruch Spinoza believed that understanding and belief happened at the same time. We start from a default position of believing information to be true without really analyzing it.

In 1993, Harvard psychology professor Daniel Gilbert decided to put the debate to the test (Gilbert, Tafarodi and Malone). He split a group of volunteers in half and gave both halves a text description detailing a real robbery. In the text there were true statements, shown in green, and false statements, shown in red. Some of the false statements made the crime appear more violent.

After reading the text, the study participants were asked to decide on a fair sentence. One of the groups was interrupted with distractions; the other completed the exercise with no distractions. Gilbert and his researchers believed the distracted group would behave in a more typical way.

The distracted group gave out substantially harsher sentences than the other group. Because they were distracted, they forgot that green sentences were true and red ones were false. They believed everything they read (in fact, Gilbert’s paper was called “You Can’t Not Believe Everything You Read”).

Gilbert’s study showed that humans tend to believe first, and that we actually have to “unbelieve” if something is eventually proven to us to be false. One study even found the place in our brain where this happens – the right inferior prefrontal cortex. This suggests that “unbelieving” forces the brain to work harder than believing, which happens by default.

This brings up a three-pronged dilemma when we consider ChatGPT: it will tend to hallucinate (at least for now), users of ChatGPT will disregard that flaw when there are significant benefits to doing so, and consumers of ChatGPT generated content will believe those hallucinations without rational consideration.

When Gilbert wrote his paper, he was still three decades away from this dilemma, but he wrapped up with a prescient question:

“The Spinozan hypothesis suggests that we are not by nature, but we can be by artifice, skeptical consumers of information. If we allow this conceptualization of belief to replace our Cartesian folk psychology, then how shall we use it to structure our own society? Shall we pander to our initial gullibility and accept the social costs of prior restraint, realizing that some good ideas will inevitably be suppressed by the arbiters of right thinking? Or shall we deregulate the marketplace of thought and accept the costs that may accrue when people are allowed to encounter bad ideas? The answer is not an easy one, but history suggests that unless we make this decision ourselves, someone will gladly make it for us.”

Daniel Gilbert

What Gilbert couldn’t know at the time was that “someone” might actually be a “something.”

(Image:  Etienne Girardet on Unsplash)

The Pursuit of Happiness

Last week, I talked about physical places where you can find happiness – places like Fremont, California, the happiest city in the US, or Finland, the happiest country in the world.

But, of course, happiness isn’t a place. It’s a state of mind. You don’t find happiness. You experience happiness. And the nature of that experience is a tough thing to nail down.

That could be why the World Happiness Report was called “complete crap” by opinion columnist Kyle Smith back in 2017:

“These surveys depend on subjective self-reporting, not to mention eliding cultural differences. In Japan there is a cultural bias against boasting of one’s good fortune, and in East Asia the most common response, by far, is to report one’s happiness as average. In Scandinavia, meanwhile, there is immense societal pressure to tell everyone how happy you are, right up to the moment when you’re sticking your head in the oven.”

Kyle Smith, 2017

And that’s the problem with happiness. It’s kind of like quantum mechanics – the minute you try to measure it, it changes.

Do you ever remember your grandparents trying to measure their happiness? It wasn’t a thing they thought about. Sometimes they were happy, sometimes they weren’t. But they didn’t dwell on it. They had other, more pressing, matters to think about. And if you asked them to self-report their state of happiness, they’d look at you like you had just given birth to a three-horned billy goat.

Maybe we think too much about happiness. Maybe we’re setting our expectations too high. A 2011 study (Mauss, Tamir, Anderson & Savino) found that the pursuit of happiness may lead to the opposite outcome, never being happy. “People who highly value happiness set happiness standards that are difficult to obtain, leading them to feel disappointed about how they feel, paradoxically decreasing their happiness the more they want it.”

This is a real problem, especially in today’s media environment. Never in our lives have we been more obsessed with the pursuit of happiness. The problem comes with how we define that happiness. If you look at how media portrays happiness, it’s a pretty self-centred concept. It’s really all about us: what we have, where we are, how we’re feeling, what we’re doing. And all that is measured against what should make us happier.

That’s where the problem of measurement raises its prickly little head. In 1971, social scientists Philip Brickman and Donald T. Campbell came up with something called the “happiness set point.” They wanted to see if major life events – both negative and positive – actually changed how happy people were. The initial study and the follow-ups that further explored the question found that after an initial shift in happiness following major events such as lottery wins, big promotions or life-altering accidents, people gradually returned to a happiness baseline.

But more recent academic work has found that it’s not quite so simple. First of all, there’s no such thing as a universal happiness “set point.” We all have different baselines of how happy we are. Also, some of us are more apt to respond, either positively or negatively, to major life events.

There are life events that can remove the foundations of happiness – for example, losing your job, causing a significant downturn in your economic status. As I mentioned before, money may not buy happiness, but economic stability is correlated with happiness.

What can make a difference in happiness is what we spend time doing. And in this case, life events can set up the foundations of changes that lead to either more happiness or less. Generally, anything that leads to more interaction with others makes us happier. Anything that leads to social withdrawal tends to make us less happy.

So maybe happiness isn’t so much about how we feel, but rather a product of what we do.

Continuing on this theme, I found a couple of interesting data visualizations by statistician Nathan Yau. The most recent one examined the things that people did at work that made them happy.

If you’re in the legal profession, I have bad news: it ranked highest for stress and low for happiness and meaningfulness. On the other end of the spectrum, hairdressers and manicurists scored high for happiness and low on stress. Construction jobs also seemed to tick the right boxes when it comes to happiness on the job.

For me, the more interesting analysis was one Yau did back in 2018. He looked at a dataset that came from asking 10,000 people what had made them happy in the past 24 hours. Then he parsed the language of those responses to look for the patterns that emerged. The two biggest categories that led to happiness were “Achievement” and “Affection.”

From this, we start to see some common underpinnings for happiness: doing things for others, achieving the things that are important to us, spending time with our favorite people, bonding over shared experiences.

So let’s get back to the “pursuit of happiness” – something so important to Americans that they enshrined it in the Declaration of Independence. But, according to Stanford historian Caroline Winterer in her 2017 TED talk, that definition of happiness is significantly different from what we currently think of. In her words, that happiness meant, “Every citizen thinking of the larger good, thinking of society, and thinking about the structures of government that would create a society that was peaceful and that would allow as many people as possible to flourish.”

When I think of happiness, that makes more sense. It also matches the other research I shared here. We seem happiest when we’re not focused on ourselves but are instead thinking about others. The reverse holds when our happiness navel-gazing measures how we come up short against the unrealistic expectations set by social media.

Like too many things in our society, happiness has morphed from something good and noble into a selfish sense of entitlement.

(Image credit – Creative Commons License – https://www.flickr.com/photos/stevenanichols/2722210623)

Real Life Usually Lives Beyond The Data

There’s an intriguing little show you’ve probably never heard of on Netflix that might be worth checking out. It’s called Travelers, and it’s a Canadian-produced sci-fi show that ran from 2016 to 2018. The only face in it you’ll probably recognize is Eric McCormack – Will from Will and Grace. He also happens to be a producer of the series.

The premise is this: special operatives from the future (the “travelers”) travel back in time to the present to prevent the collapse of society. They essentially “body snatch” everyday people from our present at the exact moment of their death and use their lives as a cover to fulfill their mission.

And that’s not even the interesting part.

The real intrigue of the show comes from the everyday conflicts created by imperfectly shoehorning a stranger into the target’s real-world experience. The showrunners do a masterful job of weaving this into their storylines: the joy of eating a hamburger, your stomach turning at the thought of drinking actual milk from a cow, calling your “wife” by her real name when you haven’t called her that in all the time you’ve known her. And it’s in this that I discovered an unexpected parallel to our current approach to marketing.

This is a bit of a detour, so bear with me.

In the future, the research team compiles as much as it can about each of the people it’s going to “borrow” for its operatives. The profiles are built from social media, public records and everything else that can be discovered from the available data.

But when the “traveler” actually takes over their life, there are no end of surprises and challenges – made up of all the trivial stuff that didn’t make it into the data profile.

You probably see where I’m going with this. When we rely solely on data to try to understand our customers or prospects, there will always be surprises. You can only learn these little quirks and nuances by diving into their lives.

That’s what A.G. Lafley, CEO of Procter & Gamble from 2000 to 2010 and then again from 2013 to 2015, knew. In a profile of Lafley which Forbes ran in 2002, writer Luisa Kroll said:

“Like the monarch in Mark Twain’s A Connecticut Yankee in King Arthur’s Court, Lafley often makes house calls incognito to find out what’s on the minds of his subjects. ‘Too much time was being spent inside Procter & Gamble and not enough outside,’ says Lafley, who took over during a turbulent period two years ago. ‘I am a broken record when it comes to saying, ‘We have to focus on the customer.’”

It wasn’t a bad way to run a business. Under Lafley’s guidance, P&G doubled their market cap, making them one of the 10 most valuable companies in the world.

Humans are messy and organic. Data isn’t. Data demands to be categorized, organized and columnized. When we deal with data, we necessarily have to treat it like data. And when we do that, we’re going to miss some stuff – probably a lot of stuff. And almost all of it will be the stuff of our lives, the things that drive behavior, the sparks that light our emotions.

It requires two different ways of thinking. Data sits in our prefrontal lobes, demanding that the brain be relentlessly rational. Data reduces behavior to bits and bytes, to be manipulated by algorithms into plotted trendlines and linear graphs. In fact, automation today can totally remove us humans from the process. Data and A.I. work together to pull the levers and push the buttons on our advertising strategies. We just watch the dashboard.

But there’s another way of thinking – one that skulks down in the brain’s subcortical basement, jammed in the corner between the amygdala and the ventral striatum. It’s here where we stack all the stuff that makes us human; all the quirks and emotions, all our manias and motivations. This stuff is not rational, it’s not logical, it’s just life.

That’s the stuff A.G. Lafley found when he walked out the front door of Procter & Gamble’s headquarters in Cincinnati and into the homes of their customers. And that’s the stuff the showrunners of Travelers had the insight to include in their narratives.

It’s the stuff that can make us sensational or stupid – often at the same time.

Why Infuriating Your Customers May Not Be a Great Business Strategy

“Online, brand value is built through experience, not exposure”

First, a confession. I didn’t say this. I wish I’d said it, but it was actually said by usability legend Jakob Nielsen at a workshop he did way back in 2006. I was in the audience, and I was listening.  Intently.

But now, some 17 years later, I have to wonder if anyone else was. According to a new study from Yext that Mediapost’s Laurie Sullivan looked at, many companies are still struggling with the concept. Here are just a few tidbits from her report:

“47% (of leads) in a Yext survey saying they were unable to make an online purchase because the website’s help section did not provide the information needed.”

“On average respondents said it takes nearly 9 hours for a typical customer service issue to be resolved. Respondents said resolution should take about 14.5 minutes.”

“42% of respondents say that help sites do not often provide the answers they look for with a first search.”

“The biggest challenge, cited by 61%, is that the help site does not understand their question.”

This isn’t rocket science, people. If you piss your customers and prospects off, they will go find one of your competitors that doesn’t piss them off. And they won’t come back.

Perhaps the issue is that businesses doing business online have a bad case of the Lake Wobegon effect. This, according to Wikipedia, is “a natural human tendency to overestimate one’s capabilities.” It came from Garrison Keillor’s description of his fictional town in Minnesota where “all the women are strong, all the men are good-looking, and all the children are above average.”

When applied to businesses, it means that they think they’re much better at customer service than they actually are. In a 2005 study titled “Closing the delivery gap,” global consulting firm Bain & Company found that 80% of companies believed they were delivering a superior service. And yet only 8% of customers believed they were receiving excellent service.

I couldn’t find an update to this study, but I suspect it’s probably still true. It’s also true that when it comes to judging the quality of your customer service, your customer is the only one who can do it. So you should listen to them.

If you don’t listen, the price you’re paying is huge. In yet another study – call centre platform provider TCN’s second annual “Consumer Insights about Customer Service” – 66% of Americans said they are likely to abandon a brand after a poor customer service experience.

Yet, for many companies, customer service is at the top of their cost-cutting hit list. According to the Bureau of Labor Statistics, the projected average growth rate for all occupations from 2020 to 2030 is 8%, but for customer service specifically, the estimated growth is actually -4%. In many cases, this reduced head count is due to companies either outsourcing their customer service or swapping people for technology.

This is probably not a great move.

Again, according to the TCN study, when asked what their preferred method of communication with a company’s customer service department was, number one was “talking to a live agent by phone,” with 49% choosing it. Just behind, at 45%, was an “online chat with a live agent.”

Now, granted, this is coming from a company that just happens to provide these solutions, so take it with a grain of salt, but still, this is probably not the place you should be reducing your head count.

One final example of the importance of customer service, not from a study but from my own circle of influencers. My wife and I recently booked a trip with my daughter and her husband and, like everyone else in the last few years, we found we had to cancel the trip. The trip was booked through Expedia, so the credits, while issued by the carrier, had to be rebooked through Expedia.

My daughter tried to rebook online and soon found that she had to talk to an Expedia Customer Service Agent. We happened to be with her when she did this. It turned out she talked to not one, but three different agents. The first flatly refused to rebook and seemed to have no idea how the system worked. The second was slightly more helpful but suggested a way to rebook that my daughter wasn’t comfortable with. The third finally got the job done. This took about 3 hours on the phone, all to do something that should have taken 2 minutes online.

I haven’t mustered up the courage to attempt to rebook my credits yet. One thing I do know – it will involve whiskey.

What are the chances that we will book another flight on Expedia? About the same as me making the 2024 Olympic Chinese Gymnastic Team.

Actually, that might have the edge.

Older? Sure. Wiser? Debatable.

I’ve always appreciated Mediapost Editor-in-Chief Joe Mandese’s take on things. It’s usually snarky, cynical and sarcastic, all things which are firmly in my wheelhouse. He also says things I may think but wouldn’t say for the sake of political politeness.

So when Joe gets a full head of steam up, as he did in a recent post entitled “Peak Idiocracy?”, I set aside some time to read it. I can vicariously fling aside my Canadian reticence and enjoy a generous helping of Mandesian snarkiness. In this case, the post was a recap of Mediapost’s 2023 Marketing Politics Conference – and the depths that political advertising is sinking to in order to appeal to younger demographics. Without stealing Joe’s thunder (please read the post if you haven’t), one example involved TikTok and mouth mash-up filters. After the panel where this case study surfaced, Joe posed a question to the panelists.

“If this is how we are electing our representative leaders, do you feel like we’ve reached peak idiocracy in the sense that we are using mouth filters and Harry Potter memes to get their messages across?”

As Joe said, it was an “old guy question.” More than that, it was a cynical, smart, sarcastic old guy question. But the fact remains, it was an old guy question. One of the panelists, DGA Digital Director Laura Carlson responded:

“I don’t think we should discount young voters’ intelligence. I think being able to have fun with the news and have fun with politics and enjoy TikTok and enjoy the platform while also engaging with issues you care about is something I wouldn’t look down on. And I think more of it is better.”

There’s something to this. Maybe a lot to this.

First, I think we have a fundamentally different idea of “messaging” from generation to generation. Our generation (technically I’m a Boomer, but the label Generation Jones is a better fit) grew up with the idea that information, whether it came via TV, newspaper, magazine or radio, was delivered as a complete package. There was a scarcity of information, and this bundling of curated information was our only choice for being informed.

That’s not the case for a generation raised with the Internet and social media. Becoming aware and being informed are often decoupled. In an environment jammed with information of all types – good and bad – information-foraging strategies have had to evolve. Now, you have to somehow pierce the information filters we have all put in place in order to spark awareness. If you are successful in doing that and can generate some curiosity, there are umpteen million sources just a few keystrokes away where you can become informed.

Still, we “old guys” (and “old gals” – for the sake of consistency, I’ll use the masculine label, but I mean it in the gender-neutral way) do have a valid perspective that shouldn’t be dismissed as us just being old and grumpy. We’ve been around long enough to see how actions and consequences are correlated. We’ve seen how seemingly trivial trends can have lasting impacts, both good and bad. There is experience here that can prove instructive.

But we also must appreciate that those a few generations behind us have built their own cognitive strategies to deal with information that are probably a better match for the media environment we live in today.

So let me pose a different question. If only one generation could vote, and if everyone’s future depended on that vote, which generation would you choose to give the ballots to? Pew Research did a generational breakdown on awareness of social issues and for me, the answer is clear. I would far rather put my future in the hands of Gen Z and Millennials than in the hands of my own generation. They are more socially aware, more compassionate, more committed to solving our many existential problems and more willing to hold our governments accountable.

So, yes, political advertising might be dumbed down to TikTok level for these younger voters, but they understand how the social media game is played. I think they are savvy enough to know that a TikTok mash-up is not something to build a political ideology on. They accept it for what it is: a brazen attempt to scream just a little louder than the competition for their attention, standing out from the cacophony of media intrusiveness that engulfs them. If it has to be silly to do that, so be it.

Sure, the generation of Joe Mandese and myself grew up with “real” journalism: the nightly news with Dan Rather and Tom Brokaw, 60 Minutes, The MacNeil/Lehrer Report, the New York Times, The Washington Post. We were weaned on political debates that dealt with real issues.

And for all that, our generation still put Trump in the White House. So much for the wisdom of “old guys.”

The Eternal Hatred of Interruptive Messages

Spamming and Phishing and Robocalls at Midnight
Pop ups and Autoplays and LinkedIn Requests from Salespeople

These are a few of my least favorite things

We all feel the excruciating pain of unsolicited demands on our attention. In a study by online security firm Kaspersky that asked 2,000 Brits to rank the 50 most annoying things in life, deleting spam email came in at number 4, behind scrubbing the bath, being trapped in voicemail hell and cleaning the oven.

Based on this study, cleanliness is actually next to spamminess.

Granted, Kaspersky is a tech security firm, so the results are probably biased to the digital side, but for me they check out. As I ran down the list, I hated all the same things that were listed.

In the same study, robocalls came in at number 10. Personally, that tops my list, especially phishing robocalls. I hate – hate – hate rushing to my phone only to hear that the IRS is going to prosecute me unless I immediately push 7 on my touch-tone keypad.

One, I’m Canadian. Two, go to Hell.

I spend more and more of my life trying to avoid marketers and scammers (the line between the two is often fuzzy) trying desperately to get my attention by any means possible. And it’s only going to get worse. A study just out showed that the ChatGPT AI chatbot could be a game changer for phishing, making scam emails harder to detect. And with Google’s Gmail filters already trapping 100 million phishing emails a day, that is not good news.

The marketers in my audience are probably outrunning Usain Bolt in their dash to distance themselves from spammers, but interruptive demands on our attention all sit on a spectrum that shares the same baseline: any demand on our attention that we don’t ask for will annoy us. The only difference is the degree of annoyance.

Let’s look at the psychological mechanisms behind that annoyance.

There is a direct link between the parts of our brain that govern the focusing of attention and the parts that regulate our emotions. At its best, it’s called “flow” – a term coined by Mihaly Csikszentmihalyi that describes a sense of full engagement and purpose. At its worst, it’s a feeling of anger and anxiety when we’re unwillingly dragged away from the task at hand.

A 2017 neurological study by Rejer and Jankowski found that when a participant’s cognitive processing of a task was interrupted by online ads, activity in the frontal and prefrontal cortex simply shut down while other parts of the brain significantly shifted activity, indicating a loss of focus and a downward slide in emotions.

Another study, by Edwards, Li and Lee, points the finger at something called Reactance Theory as a possible explanation. Very simply put, when something interrupts us, we perceive a loss of freedom to act as we wish and a loss of control of our environment. Again, we respond by getting angry.

It’s important to note that this negative emotional burden applies to any interruption that derails what we intend to do. It is not specific to advertising, but a lot of advertising falls into that category. It’s the nature of the interruption and our mental engagement with the task that determine the degree of negative emotion.

Take skimming through a news website, for instance. We are there to forage for information. We are not actively engaged in any specific task. And so being interrupted by an ad while in this frame of mind is minimally irritating.

But let’s imagine that a headline catches our attention, and we click to find out more. Suddenly, we’re interrupted by a pop-up or pre-roll video ad that hijacks our attention, forcing us to pause our intention and focus on irrelevant information. Our level of annoyance begins to rise quickly.

Robocalls fall into a different category of annoyance for many reasons. First, we have a conditioned response to phone calls where we hope to be rewarded by hearing from someone we know and care about. That’s what makes it so difficult to ignore a ringing phone.

Secondly, phone calls are extremely interruptive. We must literally drop whatever we’re doing to pick up a phone. When we go to all this effort only to realize we’ve been duped by an unsolicited and irrelevant call, the “red mist” starts to float over us.

You’ll note that – up to this point – I haven’t even dealt with the nature of the message. This has all been focused on the delivery of the message, which immediately puts us in a more negative mood. It doesn’t matter whether the message is about a service special for our vehicle, an opportunity to buy term life insurance or an attempt by a fictitious Nigerian prince to lighten the load of our bank account by several thousand dollars; whatever the message, we start in an irritated state simply due to the nature of the interruption.

Of course, the more nefarious the message that’s delivered, the more negative our emotional response will be. And this has a doubling down effect on any form of intrusive advertising. We learn to associate the delivery mechanism with attempts to defraud us. Any politician that depends on robocalls to raise awareness on the day before an election should ponder their ad-delivery mechanism.

Good News and Bad News about Black Swans

First, the good news. According to a new study, we may be able to predict extreme catastrophic events such as earthquakes, tsunamis, massive wildfires and pandemics through machine learning and neural networks.

The problem with these “black swan” types of events (events that are very rare but have extreme consequences) is that there isn’t much existing data we can use to predict them. The technical term for them is “stochastic” events – they are random and, by definition, very difficult to forecast.

Until now. According to the study’s lead author, George Karniadakis, the researchers may have found a way to give us a heads up by using machine learning to make the most out of the meagre data we do have. “The thrust is not to take every possible data and put it into the system, but to proactively look for events that will signify the rare events,” Karniadakis says. “We may not have many examples of the real event, but we may have those precursors. Through mathematics, we identify them, which together with real events will help us to train this data-hungry operator.”
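
I can’t speak to the study’s actual machinery (Karniadakis’s group works with neural operators, which is well beyond a blog post), but the “proactively look for the informative events” idea is, at its core, active learning. Here’s a heavily simplified, hypothetical sketch using scikit-learn and synthetic data – an illustration of the principle, not the paper’s method:

```python
# Active learning on a synthetic rare-event problem: instead of labeling
# data at random, the model asks for the examples it is least sure about.
# Everything here (data, model, thresholds) is invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(5000, 4))                 # "precursor" measurements
y_pool = (X_pool.sum(axis=1) > 3.5).astype(int)     # rare extreme events (~4%)

# Seed with a tiny labeled set that contains at least a few real events.
pos = np.where(y_pool == 1)[0][:5]
neg = np.where(y_pool == 0)[0][:15]
labeled = list(pos) + list(neg)

model = LogisticRegression()
for _ in range(30):
    model.fit(X_pool[labeled], y_pool[labeled])
    proba = model.predict_proba(X_pool)[:, 1]
    uncertainty = np.abs(proba - 0.5)               # 0 = model is on the fence
    uncertainty[labeled] = np.inf                   # don't re-query known points
    labeled.append(int(np.argmin(uncertainty)))     # query the most informative

print(f"trained on {len(labeled)} labels out of {len(X_pool)} observations")
```

The design choice mirrors the quote: with only a handful of real events, you spend your labeling budget where the model is most uncertain, which is where the precursors of rare events tend to live.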

This means that this science could potentially save thousands – or millions – of lives.

But – and now comes the bad news – we have to listen to it. And we have a horrible track record of doing that. Let’s take just one black swan – COVID-19. Remember that?

Justsecurity.org is an “online forum for the rigorous analysis of security, democracy, foreign policy, and rights.” In other words, it’s their job to minimize the impact of black swans. And they put together a timeline of the US response to the COVID-19 pandemic. Now that we know the consequences, it’s a terrifying and maddening read. Without getting into the details, it was months before the US federal government took substantive action against the pandemic, despite repeated alerts from healthcare officials and scientists. This put the U.S. behind pretty much the entire developed world in terms of minimizing the impact of the pandemic and saving lives. All the bells, whistles and sirens were screaming at full volume, but no one wanted to listen.

Why? Because there has been a systemic breakdown in what we call epistemic trust – trust in new information coming to us from what should be a trustworthy and relevant source.

I’ll look at this breakdown on two fronts – trust in government and trust in science. These two things should work together, but all too often they don’t. That was especially true in the Trump administration’s handling of the COVID-19 pandemic.

Let’s start with trust in government. Based on a recent study across 22 countries by the OECD, on average only about half the citizens trust their government. Trust is highest in countries like Finland, Norway and Luxembourg (where only 20 to 30% of the citizens don’t trust their government) and lowest in countries such as Colombia, Latvia and Austria (where over 60% of citizens have no trust in their government).

You might notice I didn’t mention the U.S. That’s because it wasn’t included in the study. But the Pew Research Center has been tracking trust in government since 1958, so let’s look at that.

The erosion of trust in the US federal government started with Lyndon Johnson, with trust plummeting through Nixon and Watergate. Interestingly, although separated by ideology, both Republicans and Democrats track similarly when you look at the erosion of trust from Nixon through George W. Bush, with the exception being Ronald Reagan. That pattern began to break down with Obama and polarized even further with Trump and Biden. Since then the two parties’ trend lines have moved in opposite directions, but the overall trend has still been towards lower trust.

Now, let’s look at trust in science. While not as drastic as the decline of trust in government, Pew found that trust in science has also declined, especially in the last few years. Since 2020, the percentage of Americans who have no trust in science has almost doubled, from 12% in April 2020 to 22% in December 2021.

It’s not that the science got worse in those 20 months. It’s that we didn’t want to hear what the science was telling us. The thing about epistemic trust – our willingness to trust trustworthy information – is that it varies depending on what mood we’re in. The higher our stress level, the less likely we are to accept good information at face value, especially if what it’s trying to tell us will only increase our level of stress.

Inputting new information that disrupts our system of beliefs is hard work under any circumstances. It taxes the brain. And if our brain is already overtaxed, it protects itself by locking the doors and windows that new information might sneak through and doubling down on our existing beliefs. This is what psychologists call confirmation bias: we only accept new information if it matches what we already believe. This is doubly true if the new information is not something we really want to hear.

The only thing that may cause us to question our beliefs is a niggling doubt, caused by information that doesn’t fit with our beliefs. But we will go out of our way to find information that does conform to our beliefs so we can ignore the information that doesn’t fit, no matter how trustworthy its source.  The explosion of misinformation that has happened on the internet and through social media has made it easier than ever to stick with our beliefs and willfully ignore information that threatens those beliefs.

The other issue in the systemic breakdown of trust may not always be the message – it might be the messenger. If science is trying to warn us about a threatening black swan, that warning is generally going to be delivered in one of two ways: either through a government official or through the media. And that’s probably where we have our biggest problem. Again, referring to research done by Pew, Americans distrusted journalists almost as much as government. Sixty percent of American adults had little to no trust in journalists, and a whopping 76% had little to no trust in elected officials.

To go back to my opening line, the good news is that science can warn us about black swan events and save lives. The bad news is we have to pay attention to those warnings.

Otherwise, it’s just a boy calling “wolf.”

1,000,000 Words to the Wise

According to my blog, I’ve published 1152 posts since I started it back in 2004. I was 43 when I started writing these posts.

My average post is about 870 words long, so based on my admittedly limited math skills, that means I’ve written just a smidge over 1 million words in the last 18 years. If it were a book, that would be 1.71 books the length of War and Peace, or ten average novels.
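
For the skeptics, the back-of-the-envelope math (the comparison word counts are rough figures I’m assuming – roughly 587,000 words for an English War and Peace, 100,000 for a typical novel):

```python
posts = 1152           # posts since 2004
avg_words = 870        # average words per post

total = posts * avg_words
print(total)                         # 1,002,240 -- a smidge over a million
print(round(total / 587_000, 2))     # ~1.71 War and Peaces
print(round(total / 100_000))        # ~10 average novels
```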

For those of you who have been following my output for some or all of that time, first of all, I thank you. Secondly, you’ll have noticed a slow but steady editorial drift towards existential angst. I suspect that’s a side effect of aging.

For most of us, as we age, we grapple with the nature of the universe. We worry that the problems that lie in the future might be beyond our capabilities to deal with. We fret about the burning dumpster fire we’re leaving for the next generation.

Most of us tend to deal with this by narrowing our span of control. We zero in on achieving order with the things we feel lie within our capabilities. For the average aging guy, this typically manifests itself in obsessions with weed-free lawns, maniacally over-organized garages or driveways free of grease spots. I aspire to achieve at least one of these things before I die.

But along with this obsessive need for order somewhere in our narrowing universe, there’s also a recognition that time is no longer an unlimited commodity for us. Some of us feel we need to leave something meaningful behind. More than a few of us older dudes become obsessed with creating our magnum opus.

Take Albert Einstein, for example. In 1905, which would be known as his annus mirabilis (miracle year), Einstein produced four papers that redefined physics as we knew it. One of them was the paper on special relativity. Einstein was just 26 years old.

As stunning as his achievements were that year, they were not what he wanted to leave as his definitive legacy. He would live another 50 years, until 1955, and spent a good portion of the last half of his life chasing a Unified Field Theory that he hoped would somehow reconcile the explosion of contradiction that came with the emergence of quantum mechanics. He would never be successful in doing so.

In his 1940 essay, ‘A Mathematician’s Apology,’ G.H. Hardy asserted that mathematics was “a young man’s game” and that mathematical ability declined as one got older. By extension, conventional wisdom would have you believe that the same holds true for science — primarily the so-called ‘hard’ sciences like chemistry, biology and especially physics.

Philosophy – on the other hand – is typically a path that doesn’t reach its peak until much later in life. This is true for most of what are called the “soft” sciences, including political science, economics and sociology.

In an admittedly limited but interesting analysis, author and programmer Mark Jeffrey visualized the answer to the question: “At what age do we do our greatest work?” In fields like mathematics and physics, notable contributors hit their peak in their mid-30s. But in philosophy, literature, art and even architecture, the peak of those included came a decade or two later. As Jeffrey notes, his methodology probably skewed results to the younger side.

This really comes down to two different definitions of intelligence: pure cognitive processing power versus an ability to synthesize input from the world around us and – hopefully – add some wisdom to the mix. Some disciplines need a little seasoning – a little experience and perspective. This difference in the nature of our intelligence really drives the age-old debate between hard sciences and soft sciences, as a post from Utah State University explains:

“Hard sciences use math explicitly; they have more control over the variables and conclusions. They include physics, chemistry and astronomy. Soft sciences use the process of collecting empirical data then use the best methods possible to analyze the information. The results are more difficult to predict. They include economics, political science and sociology.”

In this explanation, you’ll notice a thread I’ve plucked at before, the latest being my last post about Elon Musk and his takeover of Twitter: hard sciences focus on complicated problems and soft sciences look at complex problems. People who are “geek smart” and good at complicated problems tend to peak earlier than those who are willing to tackle complex problems. You’re born with “smart” – but you have to accumulate “wisdom” over your life.

Now, I certainly don’t intend to put myself in the august company quoted above. My path has been infinitesimally consequential compared to, say, Albert Einstein’s. But still, I think I get where Einstein was trying to go when he became obsessed with trying to (literally) bring some order to the universe.

For myself, I have spent much of the last decade or so trying to understand the thorny entanglement of technology and human behavior. I have watched digital technology seep into every aspect of our experience.

And I’m worried. I’m worried because I think this push of technology has been powered by a cabal of those who are “geek smart” but lack the wisdom or humility to ponder the unintended consequences of what they are unleashing. If I’ve gathered even a modicum of the type of intelligence required to warn of what may lie on the path ahead, I think I have to keep doing so, even if it takes another million words – give or take.

It Should Be No Surprise that Musk is Messing Up Twitter

I have to admit – I’m somewhat bemused by all the news rolling out of Elon Musk’s V2.0 edition of Twitter. Here is just a quick round-up of headlines grabbed from a Google News search last week:

Elon Musk took over a struggling business with Twitter and has quickly made it worse – CNBC

Elon Musk is Bad at This – The Atlantic

The Elon Musk (Twitter) Era Has Been a Complete Mess – Vanity Fair

Elon Musk “Straight-up Alone,” “Winging” Twitter Changes – Business Insider

To all these, I have to say, “What the Hell did you expect?”

Look, I get that Musk is on a different plane of smart from most of us. No argument there.

The same is true, I suspect, for most tech CEOs who are the original founders of their company. The issue is that the kind of smart they are is not necessarily the kind of smart you need to run a big complex corporation. If you look at the various types of intelligence, they would excel at logical-mathematical intelligence – or what I would call “geek-smart.” But this intelligence can often come at the expense of other kinds of intelligence that would be a better fit in the CEO’s role. Both interpersonal and intrapersonal intelligence immediately come to mind.

Musk is not alone. There is a bushel load of tech CEOs who have pulled off a number of WTF moves. In his article in the Atlantic titled Silicon Valley’s Horrible Bosses, Charlie Warzel gives us a few examples ripped straight from the handbook of the “Elon Musk School of Management.” Most of them involve making hugely impactful HR decisions with little concern for the emotional impact on employees and then doubling down on the mistake by choosing to communicate through Twitter.

For most of us with even a modicum of emotional intelligence, this is unimaginable. But if you’re geek-smart, it probably seems logical. Twitter is a perfect communication medium for geek-smart people – it’s one-sided, as black and white as you can get and conveniently limited to 280 characters. There is no room for emotional nuance or context on Twitter.

The disconnect in intelligence types comes in looking at the type of problems a CEO faces. I was CEO of a very small company and even at that scale, with a couple dozen employees, I spent the majority of my time dealing with HR issues. I was constantly trying to navigate my way through these thorny and perplexing issues. I did learn one thing – issues that include people, whether they be employees or customers, generally fall into the category of what is called a “complex problem.”

In 1999, an IBM manager named Dave Snowden realized that not every problem you run into when managing a corporation requires the same approach. He put together a decision-making model to help managers identify the best decision strategy for the issue they’re dealing with. He called the model Cynefin, which is the Welsh word for habitat. In the model, there are five decision domains: Clear, Complicated, Complex, Chaotic and Confusion. Cynefin is really a sense-making tool to help guide managers through problems that are complicated or complex in the hope that chaos can be avoided.
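
For what it’s worth, the model is often summarized as a lookup from domain to decision approach. The pairings below are the standard ones from the Cynefin literature; the code itself is just my shorthand, not anything of Snowden’s:

```python
# Cynefin domains mapped to their commonly cited decision approaches.
CYNEFIN = {
    "Clear":       "sense -> categorize -> respond (apply best practice)",
    "Complicated": "sense -> analyze -> respond (bring in the experts)",
    "Complex":     "probe -> sense -> respond (run safe-to-fail experiments)",
    "Chaotic":     "act -> sense -> respond (stabilize first, ask later)",
    "Confusion":   "break the situation apart, assign pieces to other domains",
}

def approach(domain: str) -> str:
    """Return the decision approach associated with a Cynefin domain."""
    return CYNEFIN.get(domain, "unknown domain")

# People problems -- employees, customers -- usually live here:
print(approach("Complex"))
```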

Geek-smart people are very good at complicated problems. This is the domain of the “expert,” who can rapidly sift through the “known unknowns.”

Give an expert a complicated problem and they’re the perfect fit for the job. They have the ability to home in on the relevant details and parse out the things that would distract the rest of us. Cryptography is an example of a complicated problem. So is most coding. This is the natural habitat of the tech engineer.

Tech founders initially become successful because they are very good at solving complicated problems. In fact, in our culture, they are treated like rock stars. They are celebrated for their “expertise.” Typically, this comes with a “smartest person in the room” level of smugness. They have no time for those who don’t see through the complications of the world the same way they do.

Here we run into a cognitive obstacle uncovered by political science writer Philip E. Tetlock in his 2005 book, Expert Political Judgment: How Good Is It? How Can We Know?

As Tetlock discovered, expertise in one domain doesn’t always mean success in another, especially if one domain has complicated problems and the other has complex problems.

Complex problems, like predicting the future or managing people in a massive organization, lie in the realm of “unknown unknowns.” Here, the answer is emergent. These problems are, by their very nature, unpredictable. The very toughest complex problems fall into a category I’ve talked about before: wicked problems. And, as Philip Tetlock discovered, experts are no better at dealing with complexity than the rest of us. In fact, in a complex scenario like predicting the future, you’d probably have just as much success with a dart-throwing chimpanzee.

But it gets worse. There’s no shame in not being good at complex problems. None of us are. The problem with expertise lies not in a lack of knowledge, but in experts sticking to a cognitive style ill-suited to the task at hand: trying to apply complicated brilliance to complex situations. I call this the “everything is a nail” syndrome. When all you have is a hammer, everything looks like a nail.

Tetlock explains, “They [experts] are just human in the end. They are dazzled by their own brilliance and hate to be wrong. Experts are led astray not by what they believe, but by how they think.”

A geek-smart person believes they know the answer better than anyone else because they see the world differently. They are not open to outside input – yet it’s exactly that kind of open-minded thinking that’s required to wrestle with complex problems.

When you consider all that, is it any wonder that Musk is blowing up Twitter – and not in a good way?