Why I’m Worried About AI

Even in my world, which is nowhere near the epicenter of the technology universe, everyone is talking about AI. And depending on who’s talking, it’s either going to be the biggest boon to humanity, or it’s going to wipe us out completely. Middle ground seems to be hard to find.

I recently attended a debate at the local university about it. Two were arguing for AI, and two were arguing against. I went into the debate somewhat worried. When I walked out at the end of the evening, my worry was bubbling just under the panic level.

The “For” Team had a computer science professor – Kevin Leyton-Brown, and a philosophy professor – Madeleine Ransom. Their arguments seemed to rely mainly on creating more leisure time for us by freeing us from the icky jobs we’d rather not do. Leyton-Brown did make a passing reference to AI helping us to solve the many, many wicked problems we face, but he never got into specifics.

“Relax!” seemed to be the message. “This will be great! Trust us!”

The “Against” Team included Bryce Traister, a professor in Creative and Critical Studies. As far as I could see, he seemed mainly worried about AI replacing Shakespeare. He did seem quite enamored with the cleverness of his own quips.

It was the other “Against” debater who was the only one to actually talk about something concrete I could wrap my head around. Wendy Wong is a professor of Political Science. She has a book on data and human rights coming out this fall. Many of her concerns focused on this area.

Interestingly, the debaters all mentioned Social Media in their arguments. And on this point, they were united: all agreed that the impact of Social Media has been horrible. But the boosters were quick to say that AI is nothing like Social Media.

Except that it is. Maybe not in terms of the technology that lies beneath it, but in terms of the unintended consequences it could unleash, absolutely! Like Social Media, what will get us with AI are the things we don’t know we don’t know.

I remember when social media first appeared on the scene. Like AI, there were plenty of evangelists lining up saying that technology would connect us in ways we couldn’t have imagined. We were redefining community, removing the physical constraints that had previously limited connections.

If there was a difference between social media and AI, it was that I don’t remember the same doomsayers at the advent of social media. Everyone seemed to be saying “This will be great! Trust us!”

Today, of course, we know better. No one was warning us that social media would divide us in ways we never imagined, driving a wedge down the ideological middle of our society. There were no hints that social media could (and still might) short circuit democracy.

Maybe that’s why we’re a little warier when it comes to AI. We’ve already been fooled once.

I find that AI Boosters share a similar mindset – they tend to be from the S.T.E.M. (Science, Technology, Engineering and Math) School of Thought. As I’ve said before, these types of thinkers tend to mistake complex problems for complicated ones. They think everything is solvable, if you just have a powerful enough tool and apply enough brain power. For them, AI is the Holy Grail – a powerful tool that potentially applies unlimited brain power.

But the dangers of AI are hidden in the roots of complexity, not complication, and that requires a different way of thinking. If we’re going to get some glimpse of what’s coming our way, I am more inclined to trust the instincts of those who think in terms of the humanities. A thinker, for example, such as Yuval Noah Harari, author of Sapiens.

Harari recently wrote an essay in the Economist that may be the single most insightful thing I’ve read about the dangers of AI: “AI has gained some remarkable abilities to manipulate and generate language, whether with words, sounds or images. AI has thereby hacked the operating system of our civilisation.”

In my previous experiments with ChatGPT, it was this fear that was haunting me. Human brains operate on narratives. We are hard-wired to believe them. By using language, AI has a back door into our brains that bypasses all our protective firewalls.

My other great fear is that the development of AI is being driven by for-profit corporations, many of which rely on advertising as their main source of revenue. If ever there was a case of putting the fox in charge of the henhouse, this is it!

When it comes to AI, it’s not my job I’m afraid of losing. It’s my ability to sniff out AI-generated bullshit. That’s what’s keeping me up at night.

Deconstructing a Predatory Marketplace

Last week, I talked about a predatory ad market that was found in — of all places — in-game ads. And the predators are — of all things — the marketers of Keto Gummies. This week, I’d like to look at why this market exists, and why someone should do something about it.

First of all, let’s understand what we mean by “predatory.” In biological terms, predation is a zero-sum game: for a predator to win, someone has to lose. Wikipedia phrases it a little differently: “Predatory marketing campaigns may (also) rely on false or misleading messaging to coerce individuals into asymmetrical transactions.”

“Asymmetrical” means the winner is the predator and the loser is the prey.

In the example of the gummy market, there are three winners — predators — and three losers, or prey. The winners are the marketers who are selling the gummies, the publishers who are receiving the ad revenue and the supply-side platform that mediates the marketplace and takes its cut.

The losers — in ascending order of loss — are the users of the games who must suffer through these crappy ads, the celebrities who have had their names and images illegally co-opted by the marketer, and the consumers who are duped into actually buying a bottle of these gummies.

You might argue the order of the last two, depending on what value you put on the brand of the celebrity. But in terms of sheer financial loss, consumer fraud is a significant issue, and one that gets worse every year.  In February, the Federal Trade Commission reported that U.S. consumers lost $8.8 billion to scams last year, many of which occurred online. The volume of scams is up 30% over 2021, and is 70% higher than it was in 2020.

So it’s not hard to see why this market is predatory. But is it fraudulent? Let’s apply a legal litmus test. Fraud is generally defined as “any form of dishonest or deceptive behavior that is intended to result in financial or personal gain for the fraudster, and does harm to the victim.”

Based on this, fraud does seem to apply. So why doesn’t anyone do anything?

For one, we’re talking about a lot of potential money here. Statista pegs the in-game ad market at $32.5 billion worldwide in 2023, with a projected annual growth rate of 9.10%. That kind of money provides a powerful incentive to publishers and supply-side platforms (SSPs) to look the other way.

I think it’s unreasonable to expect the marketers of the gummies to police themselves. They have gone to great pains to move themselves away from the threat of litigation. These corporations are generally registered in jurisdictions like China or Cyprus, where legal enforcement of copyright or consumer protections is nonexistent. If someone like Oprah Winfrey has been unable to legally shut down the fraudulent use of her image and brand for two years, you can bet the average consumer who has been ripped off has no recourse.

But perhaps one of the winners in this fraudulent ecosystem — the SSPs — should consider cracking down on this practice.

In nature, predators are kept in check by something called a predator-prey relationship. If predators become too successful, they eliminate their prey and seal their own doom. But this relationship only works if there are no new sources of prey. If we’re talking about an ecosystem that constantly introduces new prey, nothing keeps predators in check.

Let’s look at the incentive for the game publishers to police the predators. True, allowing fraudulent ads does no favours for the users of their games. A large-scale study by Gao, Zeng, Lu et al. found that bad ads lead to a bad user experience.

But do game publishers really care? There is no real user loyalty to games, so churn and burn seems to be the standard operating procedure. This creates an environment particularly conducive to predators.

So what about the SSPs?

GeoEdge, an ad security solution that guards against malvertising, among other things, has just released its Q1 Ad Quality Report. In an interview, Yuval Shiboli, the company’s director of product marketing, said that while malicious ads are common across all channels, in-game advertising is particularly bad because of a lack of active policing: “The fraudsters are very selective in who they show their malicious ads, looking for users who are scam-worthy, meaning there is no security detection software in the environment.”

Quality of advertising is usually directly correlated with the pricing of the ad inventory. The cheaper the ad, the poorer the quality. In-game ads are relatively cheap, giving fraudulent predators an easy environment to thrive in. And this entire environment is created by the SSPs.

According to Shiboli, it’s a little surprising to learn who are the biggest culprits on the SSP side: “Everybody on both the sell side and buy side works with Google, and everyone assumes that its platforms are clean and safe. We’ve found the opposite is true, and that of all the SSP providers, Google is the least motivated to block bad ads.”

By allowing — even encouraging — a predatory marketplace to exist, Google and other SSPs are doing nothing less than aiding and abetting criminals. In the short term, this may add incrementally to their profits, but at what long-term price?

The Dangerous Bits about ChatGPT

Last week, I shared how ChatGPT got a few things wrong when I asked it “who Gord Hotchkiss was.” I did this with my tongue at least partially implanted in cheek – but the response did show me a real potential danger here, coming from how we will interact with ChatGPT.

When things go wrong, we love to assign blame. And if ChatGPT gets things wrong, we will be quick to point the finger at it. But let’s remember, ChatGPT is a tool, and the fault very seldom lies with the tool. The fault usually lies with the person using the tool.

First of all, let’s look at why ChatGPT put together a bio of me that was somewhat less than accurate (although it was very flattering to yours truly).

When AI Hallucinates

I have found a few articles that call ChatGPT out for lying. But lying is an intentional act, and – as far as I know – ChatGPT has no intention of deliberately leading us astray. Based on how ChatGPT pulls together information and synthesizes it into a natural-language response, it actually thought that “Gord Hotchkiss” did the things it told me I had done.

You could more accurately say ChatGPT is hallucinating – giving a false picture based on the information it retrieves and then tries to connect into a narrative. It’s a flaw that will undoubtedly get better with time.

The problem comes with how ChatGPT handles its dataset and determines relevance between items in that dataset. In this thorough examination by Machine Learning expert Devansh Devansh, ChatGPT is compared to predictive autocomplete on your phone. Sometimes, through a glitch in the AI, it can take a weird direction.

When this happens on your phone, it’s word by word, and you can easily spot where things are going off the rails. With ChatGPT, an initial error that might be small at first continues to propagate until the AI has spun complete bullshit and packaged it as truth. This is how it fabricated the Think Tank of Human Values in Business, a completely fictional organization, and inserted it into my CV in a very convincing way.

There are many, many others who know much more about AI and Natural Language Processing than I do, so I’m going to recognize my limits and leave it there. Let’s just say that ChatGPT is prone to sharing its AI hallucinations in a very convincing way.

Users of ChatGPT Won’t Admit Its Limitations

I know and you know that marketers are salivating over the possibility of AI producing content at scale for automated marketing campaigns. There is a frenzy of positively giddy accounts about how ChatGPT will “revolutionize Content Creation and Analysis” – including this admittedly tongue-in-cheek one co-authored by MediaPost Editor in Chief Joe Mandese and – of course – ChatGPT.

So what happens when ChatGPT starts to hallucinate in the middle of a massive social media campaign that is totally on autopilot? Who will be the ghost in the machine that will say, “Whoa there, let’s just take a sec to make sure we’re not spinning out fictitious and potentially dangerous content?”

No one. Marketers are only human, and humans will always look for the path of least resistance. We work to eliminate friction, not add it. If we can automate marketing, we will. And we will shift the onus of verifying information to the consumer of that information.

Don’t tell me we won’t, because we have in the past and we will in the future.

We Believe What We’re Told

We might like to believe we’re Cartesian, but when it comes to consuming information, we’re actually Spinozan.

Let me explain. French philosopher René Descartes and Dutch philosopher Baruch Spinoza had two different views of how we determine if something is true.

Descartes believed that understanding and believing were two different processes. According to Descartes, when we get new information, we first analyze it and then decide whether we believe it or not. This is the rational assessment that publishers and marketers always insist we humans do, and it’s their fallback position when they’re accused of spreading misinformation.

But Baruch Spinoza believed that understanding and belief happened at the same time. We start from a default position of believing information to be true without really analyzing it.

In 1993, Harvard psychology professor Daniel Gilbert decided to put the debate to the test (Gilbert, Tafarodi and Malone). He split a group of volunteers in half and gave both halves a text description detailing a real robbery. The text contained true statements, shown in green, and false statements, shown in red. Some of the false statements made the crime appear to be more violent.

After reading the text, the study participants were supposed to decide on a fair sentence. But one of the groups got interrupted with distractions. The other group completed the exercise with no distractions. Gilbert and his researchers believed the distracted group would behave in a more typical way.

The distracted group gave out substantially harsher sentences than the other group. Because they were distracted, they forgot that green statements were true and red ones were false. They believed everything they read (in fact, Gilbert’s paper was called “You Can’t Not Believe Everything You Read”).

Gilbert’s study showed that humans tend to believe first, and that we actually have to “unbelieve” if something is eventually proven to us to be false. One study even found the place in our brain where this happens – the right inferior prefrontal cortex. This suggests that “unbelieving” makes the brain work harder than believing, which happens by default.

This brings up a three-pronged dilemma when we consider ChatGPT: it will tend to hallucinate (at least for now), users of ChatGPT will disregard that flaw when there are significant benefits to doing so, and consumers of ChatGPT generated content will believe those hallucinations without rational consideration.

When Gilbert wrote his paper, he was still three decades away from this dilemma, but he wrapped up with a prescient question:

“The Spinozan hypothesis suggests that we are not by nature, but we can be by artifice, skeptical consumers of information. If we allow this conceptualization of belief to replace our Cartesian folk psychology, then how shall we use it to structure our own society? Shall we pander to our initial gullibility and accept the social costs of prior restraint, realizing that some good ideas will inevitably be suppressed by the arbiters of right thinking? Or shall we deregulate the marketplace of thought and accept the costs that may accrue when people are allowed to encounter bad ideas? The answer is not an easy one, but history suggests that unless we make this decision ourselves, someone will gladly make it for us.”

Daniel Gilbert

What Gilbert couldn’t know at the time was that “someone” might actually be a “something.”

(Image:  Etienne Girardet on Unsplash)

I Wish I Were Half the Person ChatGPT Thinks I Am

Yes, I’ve finally crawled aboard the journalist bandwagon about ChatGPT. The reason I’ve waited so long is that I don’t really have any valid technical or professional perspective to add to the conversation. All I have is my own personal opinion.

So here it is, for what it’s worth.

For my first test, I wanted to ask ChatGPT about something where there would be a fair amount of information online but also something where I had the advantage of knowing the inside scoop, allowing me to judge the accuracy of its response. So I asked about the Golden Triangle – the outcome of eye tracking studies we did almost 20 years ago (I just had a minor internal meltdown when I did the mental math and realized that was two decades ago).

I have to admit, ChatGPT nailed it. It gave an accurate and concise summary. I’m not sure I could have done better myself.

Google’s Golden Triangle, also known as the “Golden Triangle of Search,” refers to the top section of Google’s search engine results page (SERP) where the most relevant and highest-ranked search results are displayed. It is called the “golden triangle” because eye-tracking studies have shown that users tend to focus their attention on the top three results in the search results page, forming a triangular shape.

The concept of the Golden Triangle is important for search engine optimization (SEO) because it emphasizes the importance of ranking high in search results. Websites that appear in the Golden Triangle are more likely to receive clicks and traffic than those that appear lower on the page.

However, it’s worth noting that the layout of Google’s search engine results page has changed over the years, and the Golden Triangle may not be as relevant as it once was. Google now includes various types of results, such as featured snippets and knowledge panels, that may appear above or alongside the traditional organic search results. As a result, SEO strategies have evolved to include optimizing for these new types of results as well.

When I asked who discovered The Golden Triangle, ChatGPT accurately said it was a joint effort.

The concept of the Golden Triangle in relation to Google’s search engine results page (SERP) was not discovered by a single person, but rather through a combination of eye-tracking studies and data analysis conducted by various researchers and digital marketing professionals.

One of the earliest studies that examined how users interact with Google’s search results page was conducted by Enquiro Research in 2005. This study, titled “The Golden Triangle,” found that users tended to focus their attention on the top-left portion of the search results page, where the first few results were displayed.

Since then, other studies and analyses have confirmed the importance of the top section of the search results page, and the term “Golden Triangle” has become a widely used term in the SEO industry to describe this phenomenon.

For the record, the main players in identifying the pattern were Kevin Lee at DidIt, myself and my agency, Enquiro, EyeTools, the San Francisco eye-tracking consultants who conducted the first study and – in an independent study – Jakob Nielsen at the Nielsen Norman Group. It was Kevin and I who gave it the name The Golden Triangle, mere moments before we had to present the findings to a group at a search conference in San Jose.

So let’s try something a little less black and white. Let’s see how ChatGPT does when it has to freestyle a bit. I asked who I was.

The first paragraph was pretty accurate:

Gord Hotchkiss is a Canadian internet marketing and search engine optimization (SEO) expert. He is the CEO of a digital marketing agency called “Enquiro Search Solutions” which specializes in providing marketing and SEO services to businesses. Gord Hotchkiss is also a well-known speaker and author on the topics of online marketing and search engine optimization.

The second paragraph started slightly pumping up my ego, throwing in some accomplishments I either never achieved or can’t remember doing:

Gord Hotchkiss has been involved in the online marketing industry for over 20 years and has worked with a wide range of clients, including Fortune 500 companies. He has written numerous articles on the topics of SEO, online marketing, and search engine technology for publications such as Search Engine Land, ClickZ, and MarketingProfs.

But in the third paragraph, ChatGPT definitely had me confused with someone else:

In addition to his work in the online marketing industry, Gord Hotchkiss is also a co-founder of the “Think Tank of Human Values in Business”, a non-profit organization that focuses on the intersection of business and ethics. He is known for his thought leadership in the industry and for advocating for ethical and sustainable business practices.

Okay, where the hell did that come from? It sounds extraordinarily cool and it’s something I would love to do, but it’s definitely not on my CV. As far as I’m aware, there is no such thing as the “Think Tank of Human Values in Business.” I wish there were, but there’s not.

I tried again. This time, ChatGPT seemed a little less sure of itself. The first sentence was somewhat correct, but after that ChatGPT seemed to manufacture my bio out of nothing but moonbeams and fairy dust. It said I was:

the CEO of a leading digital marketing agency, named “Digital Always Media Inc.” Gord is also a co-founder and chair of the “The Agents of Change” conference, an annual conference focused on helping businesses adapt to the ever-changing digital landscape.

Gord has written numerous articles and books on digital marketing and has been featured in several major media outlets, including The New York Times, The Wall Street Journal, and CNN.

I don’t know who this particular Gord Hotchkiss is, but I’d like to meet him. My AI doppelgänger is leaving me in the dust. I’d better get my shit together.

Or maybe ChatGPT can actually see into the future, showing me the person I just haven’t become yet.

What worries me is how easily ChatGPT constructed complete fabrications of who I was that sounded perfectly plausible. If I didn’t know myself better, I would have simply accepted it as fact. And that – unfortunately – is what we tend to do. We don’t analyze new information and determine if it’s right or wrong. Our default is to believe it’s true until proven otherwise.

It’s this human tendency that flags the real danger with ChatGPT. And, for that reason, I have some more digging to do.

Maybe this other Gord Hotchkiss guy can give me a hand. He sounds wicked smart.

(Image by Brian Penny — Pixabay license)

It Took a Decade, but Google Glass is Finally Broken

Did you hear that Google finally pulled the plug on Google Glass?

Probably not. The announcement definitely flew under the radar. It came with much less fanfare than the original roll out in 2013. The technology, which has been quietly on life support as an enterprise tool aimed at select industries, finally had its plug pulled with this simple statement on its support page:

Thank you for over a decade of innovation and partnership. As of March 15, 2023, we will no longer sell Glass Enterprise Edition. We will continue supporting Glass Enterprise Edition until September 15, 2023.

Talk about your ignoble demises. They’re offering a mere six months of support for those stubbornly hanging on to their Glass. Glass has been thrown into the ever-growing Google Graveyard, along with Google Health, Google+, Google Buzz, Google Wave, Knol – well, you get the idea.

It’s been 10 years, almost to the day, since Google invited 8,000 people to become “Glass Explorers” (others had a different name – “Glassholes”) and plunge into the world of augmented reality.

I was not a believer – for a few reasons I talked about way back then. That led me to say, “Google Glass isn’t an adoptable product as it sits.” It took 10 years, but I can finally say, “I told you so.”

I did say that wearable technology, in other forms, would be a game changer. I just didn’t think that Google Glass was the candidate to do that. To be honest, I hadn’t really thought that much more about it until I saw the muted news that this particular Glass was a lot more than half empty. I think there are some takeaways about the fading dividing line between technology and humans that we should keep in mind.

First of all, I think we’ve learned a little more about how our brains work with “always on” technologies like Google Glass. The short answer is, they don’t – at least not very well. And this is doubly ironic because, according to an interview with Google Glass product director Steve Lee on The Verge back in 2013, that was the whole point:

“We all know that people love to be connected. Families message each other all the time, sports fanatics are checking live scores for their favorite teams. If you’re a frequent traveler you have to stay up to date on flight status or if your gate changes. Technology allows us to connect in that way. A big problem right now are the distractions that technology causes.”

The theory was that it was much less distracting to have information right in the line of sight, rather than having to go to a connected screen that might be in your pocket.

Lee went on. “We wondered, what if we brought technology closer to your senses? Would that allow you to more quickly get information and connect with other people but do so in a way — with a design — that gets out of your way when you’re not interacting with technology? That’s sort of what led us to Glass.” 

The problem here was one of incompatible operating systems – the one that drove Google Glass and the one we have baked into our brains. It turned out that maybe the technology was a little too close to our senses. A 2016 study (Lewis and Neider) found that trying to split attention between two different types of tasks – one scanning information on a heads up display and one trying to focus on the task at hand – ended up with the brain not being able to focus effectively on either. The researchers ended with this cautionary conclusion: “Our data strongly suggest that caution should be exercised when deploying HUD-based informational displays in circumstances where the primary user task is visual in nature. Just because we can, does not mean we should.”

For anyone who spends even a little time wondering how the brain works, this should not come as a surprise. There is an exhaustive list of research showing that the brain is not that great at multi-tasking. Putting a second cognitive task in our line of sight simply means the distraction is that much harder to ignore.

Maybe there’s a lesson here for Google. I think sometimes they get a little starry-eyed about their own technological capabilities and forget to factor in the human element. I remember talking to a roomful of Google engineers more than a decade ago about search behaviors. I asked them if any of them had heard of Pirolli and Card’s pioneering work on Information Foraging theory. Not one hand went up. I was gobsmacked. That should be essential reading for anyone working on a search interface. Yet, on that day, the crickets were chirping loudly in Mountain View.

If the Glass team had done their human homework, they would have found that the brain needs to focus on one task at a time. If you’re looking to augment reality with additional information, that information has to be synthesized into a single cohesive task for the brain. This means that for augmented reality to be successful, the use case has to be carefully studied to make sure the brain isn’t overloaded.

But I suspect there was another sticking point that prevented Google Glass from being widely adopted. It challenged the very nature of our relationship with technology. We like to believe we control technology, rather than the other way around. We have defined the online world as somewhere we “go” to through our connected devices. We are in control of when and where we do this. Pulling a device out and initiating an action keeps this metaphorical divide in place.

But Google Glass blurred this line in a way that made us uncomfortable. Again, a decade ago, I talked about the inevitable tipping point that will come with the merging of our physical and virtual worlds. Back then, I said, “as our technology becomes more intimate, whether it’s Google Glass, wearable devices or implanted chips, being ‘online’ will cease to be about ‘going’ and will become more about ‘being.’  As our interface with the virtual world becomes less deliberate, the paradigm becomes less about navigating a space that’s under our control and more about being an activated node in a vast network.”

I’m just speculating, but maybe Google Glass was just a step too far in this direction – for now, anyway.

(Feature image: Tim.Reckmann, CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0, via Wikimedia Commons)

The Pursuit of Happiness

Last week, I talked about physical places where you can find happiness – places like Fremont, California, the happiest city in the US, or Finland, the happiest country in the world.

But, of course, happiness isn’t a place. It’s a state of mind. You don’t find happiness. You experience happiness. And the nature of that experience is a tough thing to nail down.

That could be why the World Happiness Report was called “complete crap” by opinion columnist Kyle Smith back in 2017:

“These surveys depend on subjective self-reporting, not to mention eliding cultural differences. In Japan there is a cultural bias against boasting of one’s good fortune, and in East Asia the most common response, by far, is to report one’s happiness as average. In Scandinavia, meanwhile, there is immense societal pressure to tell everyone how happy you are, right up to the moment when you’re sticking your head in the oven.”

Kyle Smith, 2017

And that’s the problem with happiness. It’s kind of like quantum mechanics – the minute you try to measure it, it changes.

Do you ever remember your grandparents trying to measure their happiness? It wasn’t a thing they thought about. Sometimes they were happy, sometimes they weren’t. But they didn’t dwell on it. They had other, more pressing, matters to think about. And if you asked them to self-report their state of happiness, they’d look at you like you had just given birth to a three-horned billy goat.

Maybe we think too much about happiness. Maybe we’re setting our expectations too high. A 2011 study (Mauss, Tamir, Anderson & Savino) found that the pursuit of happiness may lead to the opposite outcome: never being happy. “People who highly value happiness set happiness standards that are difficult to obtain, leading them to feel disappointed about how they feel, paradoxically decreasing their happiness the more they want it.”

This is a real problem, especially in today’s media environment. Never in our lives have we been more obsessed with the pursuit of happiness. The problem comes with how we define that happiness. If you look at how media portrays happiness, it’s a pretty self-centred concept. It’s really all about us: what we have, where we are, how we’re feeling, what we’re doing. And all that is measured against what should make us happier.

That’s where the problem of measurement raises its prickly little head. In 1971, social scientists Philip Brickman and Donald T. Campbell came up with something called the “happiness set point.” They wanted to see if major life events – both negative and positive – actually changed how happy people were. The initial study and the follow-ups that further explored the question found that after an initial shift in happiness following major events such as lottery wins, big promotions or life-altering accidents, people gradually returned to a happiness baseline.

But more recent academic work has found that it’s not quite so simple. First of all, there’s no such thing as a universal happiness “set point.” We all have different baselines of how happy we are. Also, some of us are more apt to respond, either positively or negatively, to major life events.

There are life events that can remove the foundations of happiness – for example, losing your job, causing a significant downturn in your economic status. As I mentioned before, money may not buy happiness, but economic stability is correlated with happiness.

What can make a difference in happiness is what we spend time doing. And in this case, life events can set up the foundations of changes that can lead to either more happiness or less. Generally, anything that leads to more interaction with others makes us happier. Anything that leads to social withdrawal tends to make us less happy.

So maybe happiness isn’t so much about how we feel, but rather a product of what we do.

Continuing on this theme, I found a couple of interesting data visualizations by statistician Nathan Yau. The most recent one examined the things that people did at work that made them happy.

If you’re in the legal profession, I have bad news. That ranked highest for stress and low for happiness and meaningfulness. On the other end of the spectrum, hairdressers and manicurists scored high for happiness and low on stress. Construction jobs also seemed to tick the right boxes when it comes to happiness on the job.

For me, the more interesting analysis was one Yau did back in 2018. He looked at a dataset that came from asking 10,000 people what had made them happy in the past 24 hours. Then he parsed the language of those responses to look for the patterns that emerged. The two biggest categories that led to happiness were “Achievement” and “Affection.”

From this, we start to see some common underpinnings for happiness: doing things for others, achieving the things that are important to us, spending time with our favorite people, bonding over shared experiences.

So let’s get back to the “pursuit of happiness” – something so important to Americans that they enshrined it in the Declaration of Independence. But, according to Stanford historian Caroline Winterer, in her 2017 TED talk, that definition of happiness is significantly different from what we currently think of. In her words, that happiness meant, “Every citizen thinking of the larger good, thinking of society, and thinking about the structures of government that would create a society that was peaceful and that would allow as many people as possible to flourish.”

When I think of happiness, that makes more sense. It also matches the other research I shared here. We seem happiest when we’re not focused on ourselves but are instead thinking about others. That’s especially hard to do when our happiness navel-gazing is busy measuring how we come up short against the unrealistic expectations set by social media.

Like too many things in our society, happiness has morphed from something good and noble into a selfish sense of entitlement.

(Image credit – Creative Commons License – https://www.flickr.com/photos/stevenanichols/2722210623)

Finding Your Happy Place

Where can you find happiness? According to a recent study from WalletHub, you’re statistically more likely to find it in Fremont or San Jose, California. If you’re in Madison, Wisconsin, you won’t be quite as happy, but you’ll still be ahead of 98.5% of the US. Fremont, San Jose and Madison are the three happiest cities in America.

If you live in Shreveport, Louisiana, Huntington, West Virginia or Detroit, Michigan, your life may well be a giant sucking hole of despair. Statistically, anyway. Those are the three least happy cities in the US.

Again, WalletHub’s words, not mine.

I know what you’re saying. You see these posts about happy places all the time in your feed. How much credence should you give them?

I’ll be honest. Normally, I scroll right past them. I don’t know what made me look at this one. Maybe it’s because I’ve recently been taking stock of my own level of happiness. Or maybe I was thinking, “What the hell? I have a few minutes. Let’s try to quantify this whole happiness thing.”

The time might be right. As we claw our way out of a global pandemic and the various other catastrophes that bump up against each other as they jostle for our attention in our news feeds, we can’t be blamed for wanting a little more happiness in our lives. I’m pretty sure that’s at least one of the factors behind the Great Resignation in the wake of Covid.

Also, more of us are choosing to work virtually from home. Wouldn’t it make sense to situate that home in the place where you’re happiest? More and more of our jobs aren’t tied to a physical location. We can live anywhere we want. So why shouldn’t that place be Fremont, California? And I’m told Madison has great cheese curds.

So, today I’m going to help you find that happy place.

First, maybe the focus on cities is a little too narrow. Who says we’re happiest in a city? Recent research has found that yes, in poorer countries, odds are you’ll be happier in a city than in the country. When the whole country is struggling to get by, there’s just more of what you need to survive in a city. But as countries become wealthier, that gap disappears and actually reverses itself, giving a slight happiness edge to those living beyond the city limits. So, if you’re looking for happiness, you might want to consider “movin’ to the country (where you’re) gonna eat a lot of peaches” (obscure pop reference for those of you over 55).

Let’s broaden our focus a mite, out to the happiest states. Luckily, the good folks at WalletHub have you covered there too. According to them, the three happiest states are (in order) Hawaii, Maryland and Minnesota. If you live in West Virginia, you better start re-examining your life choices. It scored lowest.

But who says the US is the be all and end all of happiness? Certainly not the World Happiness Report, which has to be the definitive source on all things happy. According to it, the 5 happiest countries on earth are (again in order) – Finland, Denmark, Iceland, Switzerland and the Netherlands. The US is quite a bit down the list in the sixteenth slot.

Perhaps happiness is positively correlated with pickled herring and lingonberries.

Now, for reasons I’ll explore in a future post, I urge you to take this whole empirical approach to happiness with a grain of salt, but there must be something to all these happiness rankings. These countries traditionally top the various lists of best places to live. One has to wonder why? Or, at least, this “one” wondered why.

So I put together a spreadsheet of the 20 happiest countries in the study and started looking for the common denominator of happiness. I looked at 5 different potential candidates (including some from the Global Sustainability Competitive Index): Gross Domestic Product per Capita, Social Capital, Natural Capital, Governance Performance and Liberal Democracy.
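For anyone curious what this kind of back-of-the-envelope exercise looks like, here’s a minimal sketch of correlating a happiness score against candidate factors across countries. The numbers below are illustrative placeholders I made up, not the actual study data, and the factor names simply mirror the candidates listed above.

```python
import numpy as np

# Hypothetical scores for five countries (rows); placeholders only,
# NOT the real World Happiness Report or GSCI figures.
happiness = np.array([7.8, 7.6, 7.5, 7.4, 7.3])
factors = {
    "gdp_per_capita":  np.array([7.9, 7.4, 7.6, 7.8, 7.1]),
    "social_capital":  np.array([7.9, 7.7, 7.5, 7.5, 7.2]),
    "natural_capital": np.array([7.5, 7.8, 7.3, 7.2, 7.4]),
}

# Pearson correlation of each factor with the happiness score
for name, values in factors.items():
    r = np.corrcoef(happiness, values)[0, 1]
    print(f"{name}: r = {r:.2f}")
```

A spreadsheet’s CORREL function does the same Pearson calculation; the point is just to rank the factors by how closely they track the happiness score.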

First of all, money may not buy happiness, but it certainly doesn’t hurt. There was a pretty strong correlation between GDP per capita and the happiness score. It seems that, up to a point, we need enough money to be comfortable to be happy. But, as wealth accumulates, happiness begins to plateau. The world’s longest-running happiness study has repeatedly shown this. Marc Schulz, author of “The Good Life”, said “money can’t buy us happiness, but it’s a tool that can give us security and safety and a sense of control over our lives.”

Another fairly strong correlation was with Natural Capital, which is defined as having adequate access to clean water and air, as well as proximity to forests, oceans and healthy biodiversity. This had a correlation just slightly lower than the one with GDP per capita.

Much as I would have liked it to be a little higher, given my own political leanings, there was a weaker correlation between liberal democracy and happiness. But, in the silver lining category, there was a strong correlation between liberal democracy and governance performance. The world’s happiest places tend to have a constitutional monarchy and/or a parliamentary system overseeing a social democracy. Take that for what it’s worth.

Surprisingly, the weakest correlation was between effective governance and happiness. That said, it was still a significant correlation, so it did play a part in creating the conditions required for happiness.

All of the above factors run the risk of us conflating correlation and causation. There are certain things that are table stakes for happiness. A reasonable degree of good governance, a safe environment and a healthy economy are three of these. We need them to be happy, but they don’t cause us to be happy.

The last factor, which had the strongest correlation by a significant margin, is different. Not surprisingly, social capital is a direct cause of happiness. If you want to be happy, live somewhere where people love and care for each other. Denmark, the second happiest place on earth, is the home of “hygge” – a general sense of coziness. As I’ve said before, the Danes have “created an environment that leads to bumping into each other.”

 It’s in this beneficial societal friction where you’re statistically more likely to find happiness, wherever you live.

(Image https://www.flickr.com/photos/marcygallery/3803517719 – Creative Commons License)

Real Life Usually Lives Beyond The Data

There’s an intriguing little show you’ve probably never heard of on Netflix that might be worth checking out. It’s called Travelers, a Canadian-produced sci-fi show that ran from 2016 to 2018. The only face you’ll probably recognize is Eric McCormack, the Will from Will and Grace. He also happens to be a producer of the series.

The premise is this: special operatives from the future (the “travelers”) travel back in time to the present to prevent the collapse of society. They essentially “body snatch” everyday people from our present at the exact moment of their death and use their lives as a cover to fulfill their mission.

And that’s not even the interesting part.

The real intrigue of the show comes from the everyday conflicts which arise from an imperfect shoehorning of a stranger into the target’s real-world experience. The showrunners do a masterful job of weaving this into their storylines: the joy of eating a hamburger, your stomach turning at the thought of drinking actual milk from a cow, calling your “wife” by her real name when you haven’t called her that in all the time you’ve known her. And it’s in this that I discovered an unexpected parallel to our current approach to marketing.

This is a bit of a detour, so bear with me.

In the future, the research team compiles as much as they can about each of the people they’re going to “borrow” for their operatives. The profiles are compiled from social media, public records and everything they can discover from the data available.

But when the “traveler” actually takes over their life, there are no end of surprises and challenges – made up of all the trivial stuff that didn’t make it into the data profile.

You probably see where I’m going with this. When we rely solely on data to try to understand our customers or prospects, there will always be surprises. You can only learn these little quirks and nuances by diving into their lives.

That’s what A.G. Lafley, CEO of Procter & Gamble from 2000 to 2010 and then again from 2013 to 2015, knew. In a 2002 Forbes profile of Lafley, writer Luisa Kroll said,

“Like the monarch in Mark Twain’s A Connecticut Yankee in King Arthur’s Court, Lafley often makes house calls incognito to find out what’s on the minds of his subjects. ‘Too much time was being spent inside Procter & Gamble and not enough outside,’ says Lafley, who took over during a turbulent period two years ago. ‘I am a broken record when it comes to saying, ‘We have to focus on the customer.’'”

It wasn’t a bad way to run a business. Under Lafley’s guidance, P&G doubled their market cap, making them one of the 10 most valuable companies in the world.

Humans are messy and organic. Data isn’t. Data demands to be categorized, organized and columnized. When we deal with data, we necessarily have to treat it like data. And when we do that, we’re going to miss some stuff – probably a lot of stuff. And almost all of it will be the stuff of our lives, the things that drive behavior, the sparks that light our emotions.

It requires two different ways of thinking. Data sits in our prefrontal lobes, demanding the brain to be relentlessly rational. Data reduces behavior to bits and bytes, to be manipulated by algorithms into plotted trendlines and linear graphs. In fact, automation today can totally remove we humans from the process. Data and A.I. work together to pull the levers and push the buttons on our advertising strategies. We just watch the dashboard.

But there’s another way of thinking – one that skulks down in the brain’s subcortical basement, jammed in the corner between the amygdala and the ventral striatum. It’s here where we stack all the stuff that makes us human; all the quirks and emotions, all our manias and motivations. This stuff is not rational, it’s not logical, it’s just life.

That’s the stuff A.G. Lafley found when he walked out the front door of Procter & Gamble’s headquarters in Cincinnati and into the homes of their customers. And that’s the stuff the showrunners of Travelers had the insight to include in their narratives.

It’s the stuff that can make us sensational or stupid – often at the same time.

Why Infuriating Your Customers May Not Be a Great Business Strategy

“Online, brand value is built through experience, not exposure.”

First, a confession. I didn’t say this. I wish I’d said it, but it was actually said by usability legend Jakob Nielsen at a workshop he did way back in 2006. I was in the audience, and I was listening.  Intently.

But now, some 17 years later, I have to wonder if anyone else was. According to a new study from Yext that Mediapost’s Laurie Sullivan looked at, many companies are still struggling with the concept. Here are just a few tidbits from her report:

“47% (of leads) in a Yext survey saying they were unable to make an online purchase because the website’s help section did not provide the information needed.”

“On average respondents said it takes nearly 9 hours for a typical customer service issue to be resolved. Respondents said resolution should take about 14.5 minutes.”

“42% of respondents say that help sites do not often provide the answers they look for with a first search.”

“The biggest challenge, cited by 61%, is that the help site does not understand their question.”

This isn’t rocket science, people. If you piss your customers and prospects off, they will go find one of your competitors that doesn’t piss them off. And they won’t come back.

Perhaps the issue is that businesses doing business online have a bad case of the Lake Wobegon effect. This, according to Wikipedia, is “a natural human tendency to overestimate one’s capabilities.” It came from Garrison Keillor’s description of his fictional town in Minnesota where “all the women are strong, all the men are good-looking, and all the children are above average.”

When applied to businesses, it means that they think they’re much better at customer service than they actually are. In a 2005 study titled “Closing the Delivery Gap,” global consulting firm Bain & Company found that 80% of companies believe they are delivering a superior service. And yet, only 8% of customers believe they are receiving excellent service.

I couldn’t find an update to this study, but I suspect it’s probably still true. It’s also true that when it comes to judging the quality of your customer service, your customer is the only one who can do it. So you should listen to them.

If you don’t listen, the price you’re paying is huge. According to yet another study, call center platform provider TCN’s second annual “Consumer Insights about Customer Service,” 66% of Americans are likely to abandon a brand after a poor customer service experience.

Yet, for many companies, customer service is at the top of their cost-cutting hit list. According to the Bureau of Labor Statistics, the projected average growth rate for all occupations from 2020 – 2030 is 8%, but when looking at customer service specifically, the estimated growth is actually -4%. In many cases, this reduced head count is due to companies either outsourcing their customer service or swapping people for technology.

This is probably not a great move.

Again, according to the TCN study, when respondents were asked what their preferred method of communication with a company’s customer service department was, number one was “talking to a live agent by phone,” chosen by 49%. Just behind, at 45%, was an “online chat with a live agent.”

Now, granted, this is coming from a company that just happens to provide these solutions, so take it with a grain of salt, but still, this is probably not the place you should be reducing your head count.

One final example of the importance of customer service, not from a study but from my own circle of influence. My wife and I recently booked a trip with my daughter and her husband and, like everyone else in the last few years, we found we had to cancel the trip. The trip was booked through Expedia, so the credits, while issued by the carrier, had to be rebooked through Expedia.

My daughter tried to rebook online and soon found that she had to talk to an Expedia Customer Service Agent. We happened to be with her when she did this. It turned out she talked to not one, but three different agents. The first flatly refused to rebook and seemed to have no idea how the system worked. The second was slightly more helpful but suggested a way to rebook that my daughter wasn’t comfortable with. The third finally got the job done. This took about 3 hours on the phone, all to do something that should have taken 2 minutes online.

I haven’t mustered up the courage to attempt to rebook my credits yet. One thing I do know – it will involve whiskey.

What are the chances that we will book another flight on Expedia? About the same as me making the 2024 Olympic Chinese Gymnastics Team.

Actually, that might have the edge.

Older? Sure. Wiser? Debatable.

I’ve always appreciated Mediapost Editor-in-Chief Joe Mandese’s take on things. It’s usually snarky, cynical and sarcastic, all things which are firmly in my wheelhouse. He also says things I may think but wouldn’t say for the sake of political politeness.

So when Joe gets a full head of steam up, as he did in a recent post entitled “Peak Idiocracy?”, I set aside some time to read it. I can vicariously fling aside my Canadian reticence and enjoy a generous helping of Mandesian snarkiness. In this case, the post was a recap of Mediapost’s 2023 Marketing Politics Conference – and the depths that political advertising is sinking to in order to appeal to younger demographics. Without stealing Joe’s thunder (please read the post if you haven’t), one example involved TikTok and mouth mash-up filters. After the panel where this case study surfaced, Joe posed a question to the panelists.

“If this is how we are electing our representative leaders, do you feel like we’ve reached peak idiocracy in the sense that we are using mouth filters and Harry Potter memes to get their messages across?”

As Joe said, it was an “old guy question.” More than that, it was a cynical, smart, sarcastic old guy question. But the fact remains, it was an old guy question. One of the panelists, DGA Digital Director Laura Carlson responded:

“I don’t think we should discount young voters’ intelligence. I think being able to have fun with the news and have fun with politics and enjoy TikTok and enjoy the platform while also engaging with issues you care about is something I wouldn’t look down on. And I think more of it is better.”

There’s something to this. Maybe a lot to this.

First, I think we have a fundamentally different idea of “messaging” from generation to generation. Our generation (technically I’m a Boomer, but the label Generation Jones is a better fit) grew up with the idea that information, whether on TV or radio or in a newspaper or magazine, was delivered as a complete package. There was a scarcity of information, and this bundling of curated information was our only choice for being informed.

That’s not the case for a generation raised with the Internet and social media. Becoming aware and being informed are often decoupled. In an environment jammed with information of all types – good and bad – information foraging strategies have had to evolve. Now, you have to somehow pierce the information filters we have all put in place in order to spark awareness. If you are successful in doing that and can generate some curiosity, you have umpteen million sources just a few keystrokes away where you can become informed.

Still, we “old guys” (and “old gals” – for the sake of consistency, I’ll use the masculine label, but I mean it in the gender-neutral way) do have a valid perspective that shouldn’t be dismissed as us just being old and grumpy. We’ve been around long enough to see how actions and consequences are correlated. We’ve seen how seemingly trivial trends can have lasting impacts, both good and bad. There is experience here that can prove instructive.

But we also must appreciate that those a few generations behind us have built their own cognitive strategies to deal with information that are probably a better match for the media environment we live in today.

So let me pose a different question. If only one generation could vote, and if everyone’s future depended on that vote, which generation would you choose to give the ballots to? Pew Research did a generational breakdown on awareness of social issues and for me, the answer is clear. I would far rather put my future in the hands of Gen Z and Millennials than in the hands of my own generation. They are more socially aware, more compassionate, more committed to solving our many existential problems and more willing to hold our governments accountable.

So, yes, political advertising might be dumbed down to TikTok level for these younger voters, but they understand how the social media game is played. I think they are savvy enough to know that a TikTok mash up is not something to build a political ideology on. They accept it for what it is, a brazen attempt to scream just a little louder than the competition for their attention; standing out from the cacophony of media intrusiveness that engulfs them. If it has to be silly to do that, so be it.

Sure, the generation of Joe Mandese and myself grew up with “real” journalism: the nightly news with Dan Rather and Tom Brokaw, 60 Minutes, The MacNeil/Lehrer Report, the New York Times, The Washington Post. We were weaned on political debates that dealt with real issues.

And for all that, our generation still put Trump in the White House. So much for the wisdom of “old guys.”