Getting from A to Zen

We live in a Type A world. And sometimes, that’s to our detriment.

According to one definition, Type A is achievement-oriented, competitive, fast-paced and impatient.

All of that pretty much sums up the environment we live in. But you know what’s hard to find in a Type A world? Your Zen.

I know what you’re thinking — “I didn’t peg Gord for a Zen-seeking kinda guy.” And you’re mostly right. I’m not much for meditation. I’ve tried it — it’s not for me. I’ll be honest. It feels a little too airy-fairy for my overly rational brain.

But I do love cutting the grass. I also love digging holes, retouching photos in Photoshop and cleaning pools. Those are some of the activities where I can find my Zen.

Best-selling author Peggy Orenstein found her Zen during COVID: shearing sheep. She shares her journey in her new book, “Unraveling: What I Learned About Life While Shearing Sheep, Dyeing Wool, and Making the World’s Ugliest Sweater.” Orenstein has a breezy, humorous, and self-deprecating style, but there are some deep thoughts here.

In reading the book, I learned it wasn’t the act of shearing where Peggy found her Zen. That’s because sheep shearing is really hard work. You can’t let your mind wander as you wrestle 200 to 300 pounds of Ovis aries, holding a buzzing, super-sharp set of shears while trying to give it a haircut.

As Orenstein said in a recent interview, “Imagine you were in a ballet with Nureyev and nobody told you the steps. That was what it felt like to learn shearing sheep, you know, for the first time.”

Zen? No. You might find a lot of things in that activity, but Zen isn’t likely to be one of them. Orenstein finds her Zen in a less terrifying place: cleaning poop out of the newly shorn wool. She did it the way it’s been done for centuries, in a process called carding. While she carded the wool, she would FaceTime her dad, who has dementia.

In the interview, she said, “You know, I could just slow down. These ancient arts are slow. They’re very slow and (I would) sit with him and just be next to him and have that time together and sing.”

When I heard her say that in the interview, it hit me. I said, “I have to read this book.” Because I got it. That slowing down, that inner connection, the very act of doing something that seems mindless but isn’t – because doing the act creates the space for your mind to think the thoughts it normally doesn’t have time for. All that stuff is important.

To me, that’s my Zen.

Now, unless you’re a Mahayana Buddhist, Zen is probably nothing more than a buzzword that made its way westward into our zeitgeist sometime in the last century. I am certainly not a Buddhist, so I am not going to dare tell you the definitive meaning of Zen. I am just going to tell you what my version is.

For me, Zen is a few things:

First, I think these Zen acts have to contribute to the world in some small way. There has to be something at the end that gives you a sense of accomplishment – the feeling of a job well done.

Maybe that’s why meditation is not for me. There is no tangible reward at the end. But you can look at a pile of newly shorn fleece, or a lawn neatly delineated with the tire tracks of your lawnmower.

Second, the brain must be engaged in a Zen task, but not too much. It needs some space to wander. Repetition helps. As you do the task, your mind eventually shifts to auto-pilot mode. And that’s when I find Zen, as my mind is given the license to explore.

I think this is where step one is important – whatever you’re doing has to be useful enough that you don’t feel that you’re wasting time doing it.

Finally, it helps if your Zen tasks are done in a place where the Type A world doesn’t intrude. You need the space to push back interruption and let your mind wander freely.

I realize there are some of you who will immediately connect with what I’m saying, and others who won’t have a clue. That’s okay.

I think that’s the magic of Zen: it’s not for everyone. But for those of us who understand how important it is, we sometimes need a little reminder to go seek it. Because in this Type A world, it’s becoming harder to find.

When the News Hits Home

My, how things have changed.

My intention was to write a follow-up to last week’s post about Canada’s Bill C-18 and Meta’s banning of news on Facebook. I suppose this is a follow-up of sorts. But thanks to Mother Nature – that ofttimes bully – that story was pushed right out of the queue, replaced with something far more tragic and immediate.

To me, anyway.

I live in Kelowna. Chances are you’ve heard about my home in the last few days. If you haven’t, I can tell you that when I look out my window, all I can see is thick smoke. Which may be a good thing. Last Friday, when I could see, I spent the entire evening watching West Kelowna, across Okanagan Lake from my home, burn in the path of the oncoming McDougall Creek Wildfire. As the flames would suddenly leap towards the sky, you knew that was someone’s home being ignited.

We don’t know how many homes have been lost. The fire has been too active for authorities to have the time to count. We have firefighters and first responders pouring in from around our province to help. Our Air Quality Index is 11 on a scale of 10, as bad as it can get. Thousands are out of their homes. Thousands more have their things packed by the door, ready to leave at a moment’s notice. We’re one of them.

But that’s enough about the fire. This post is about our weird relationship with the news.

When something like this happens, you have a very real, very visceral need to know what’s going on. For those of us who live here in British Columbia, the news has hit home in a way we could never have imagined. A few posts ago, I said it might be healthier for me to ignore the news, because it’s always alarming and very seldom relevant to me. Well, those words are now coming back to haunt me.

This disaster has thrown our reliance on Facebook for news into stark relief. This last Friday, Canada’s Transport Minister, Pablo Rodriguez, asked Meta to reverse its current ban on news: “We’ve seen that, throughout this emergency, Canadians have not had access to the crucial information they need. So, I ask Meta to reverse its decision, allow Canadians to have access to news on their platforms.”

But there’s another dimension to this that’s a bit more subtle, yet even more frightening. It goes to the heart of how we handle crisis. I think you must “zoom in,” performing some type of terrible triage in your mind to be able to imagine the unimaginable. As the winds shift the fire away from your home, there’s relief. But other homes now lie in the path of the fire. In your head, you know that, but emotionally you can’t help but feel a lift. It’s not noble, but it’s human.

So let’s “zoom out” – a lot. We’re not the only ones this is happening to. This is a global crisis. Twenty-six thousand people have been evacuated on the Spanish island of Tenerife. A friend of mine, an airline pilot, was volunteering just a week ago to fly people out of Maui who had lost their homes in the tragic Lahaina fire.

Take a look at NASA’s FIRMS (Fire Information for Resource Management System) website, which gives a global map of hotspots from burning wildfires. I’ve set this link to wildfire activity over the last seven days.

Scary as hell, right?

But can we actually process that, in a way that lets us move forward and start coping with this massive issue? Is it enough to change our behavior in the ways we must to finally start addressing climate change?

In a recent article on BBC.com, Richard Fisher talks about “Construal level theory” – which says that the greater the psychological distance there is between the news and your life, the less likely it is to make you change your behavior. For me, the psychological distance between myself and climate change is roughly 1 kilometer (just over half a mile) as the crow flies. That’s how far it is from my house to the nearest evacuation alert area.

It doesn’t get much closer than that.  But will we change? Will anything change?

I’m not so sure. We’ve been through this before. Exactly 20 years ago, the Okanagan Mountain wildfire raged through Kelowna, displacing over 30,000 people and destroying 239 homes. It was a summer much like this, at the time the driest summer on record. This year, we have smashed that record, as we have many times since that fire. Once we picked up, rebuilt our homes and got back to life, nothing really changed.

And now, here we are again. Let’s hope that this time is different.

It’s All in How You Spin It

I generally get about 100 PR pitches a week. And I’m just a guy who writes a post on tech, people and marketing now and then. I’m not a journalist. I’m not even gainfully employed by anyone. I am just one step removed — thanks to the platform MediaPost has provided me — from “some guy” you might meet at your local coffee shop.

But still, I get 100 PR pitches a week. Desperation for coverage is the only reason I can think of for this to be so. 99.9999% of the time, they go straight to my trash basket. And the reason they do is that they’re almost never interesting. They are — well, they’re pitches for free exposure.

Now, the average pitch, even if it isn’t interesting, should at least try to match the target’s editorial interest. It should be in the strike zone, so to speak.

Let’s do a little postmortem on one I received recently. It was titled “AI in Banking.” Fair enough. I have written a few posts on AI. Specifically, I have written a few posts on my fear of AI.

I have also written about my concerns about misuse of data. When it comes to the nexus between AI and data, I would be considered more than a little pessimistic. So, something linking AI and banking did pique my interest, but not in a good way. I opened the email.

There, in the first paragraph, I read this: “AI is changing how banks provide personalized recommendations and insights based on enriched financial data offering tailored suggestions, such as optimizing spending, suggesting suitable investment opportunities, or identifying potential financial risks.”

This, for those of you not familiar with “PR-ese,” is what we in the biz call “spin.” Kellyanne Conway once called it — more euphemistically — an alternative fact.

Let me give you an example. Let’s say that during the Tour de France half the peloton crashes and the riders get a nasty case of road rash. A PR person would spin that as “Hundreds of professional cyclists discover a new miracle instant exfoliation technique from the South of France.”

See? It’s not a lie, it’s just an alternative fact.

Let’s go on. The second paragraph of the pitch continued: “Bud, a company that specializes in data intelligence is working with major partners across the country (Goldman Sachs, HSBC, 1835i, etc.) to categorize and organize financial information and data so that users are empowered to make informed decisions and gain a deeper understanding of their financial situation.”

Ah — we’re now getting closer to the actual fact. The focus is beginning to switch from the user, empowered to make better financial decisions thanks to AI, to what is actually happening: a data marketplace being built on the backs of users for sale to corporate America.

Let’s now follow the link to Bud’s website. There, in big letters on the home page, you read:

“Turn transactional data into real-time underwriting intelligence

Bud’s AI platform and data visualizations help lenders evaluate risk, reduce losses and unlock hidden revenue potential.”

Bingo. This is not about users, at least, not beyond using them as grist in a data mill. This is about slipping a Trojan Horse into your smartphone in the form of an app and hoovering your personal data up to give big banks an intimate glimpse into not just your finances, but also your thinking about those finances. As you bare your monetary soul to this helpful “Bud,” you have established a direct pipeline to the very institutions that hold your future in their greedy little fingers. You’re giving an algorithm everything it needs to automatically deny you credit.

This was just one pitch that happened to catch my eye long enough to dig a little deeper. But it serves as a perfect illustration of why I don’t trust big data or AI in the hands of for-profit corporations.

And that will continue to be true — no matter how you PR pros spin it.

No News is Good News

I’m trying not to pay too much attention to the news. This is partly because I’m exhausted by the news, and partly because of the sad state of journalism today.

This isn’t just a “me” thing. Almost everyone I talk to says they’re trying to find coping mechanisms to deal with the news. The News industry – and its audience – has gone from being an essential part of a working democracy to something that is actually bad for you. In an online essay from four years ago, Swiss author Rolf Dobelli equates news consumption to a bad diet:

“(translated from its original German) News is to the mind what sugar is to the body. News is appetizing, easily digestible and at the same time highly harmful. The media feeds us morsels of trivial stories, tidbits that by no means satisfy our hunger for knowledge. Unlike with books and long, well-researched articles, there is no saturation when consuming news. We can devour unlimited amounts of messages; they remain cheap sugar candies. As with sugar, the side effects only show up with a delay.”

Rolf Dobelli, 2019

This alarming state is due to the fact that the News (in the US) is supported by advertising, which means it has a ravenous appetite for eyeballs. Because of this, it is highly profitable to make news addictive.

This creates a state, as Dobelli points out, where even though the news is highly inflammatory, like a constantly jangling alarm bell, almost all the news we consume is irrelevant to our daily lives. While the news we watch pushes all our hot buttons, it doesn’t serve a useful purpose. In fact, it does the exact opposite: it leads to chronic mental and physical ill-being and may cause us to start ignoring the warning signs we should be paying attention to.

A study last year (McLaughlin, Gotlieb and Mills) found ties between problematic news consumption and mental ill-being. The study found that 16.5% of 1,100 people polled in an online survey showed signs of “severely problematic” news consumption, which led them to focus less on school, work and family, and contributed to an inability to sleep.

Dobelli’s essay goes even further, pointing a finger at excessive news consumption as the cause of a list of issues including cognitive errors, inhibiting deeper thinking, wasting time, killing creativity, making us more passive and even wiring our brains for addiction in a manner similar to drugs.

All these negative side effects come from chronic stress – a constant and pervasive alarmed state that excessive news consumption puts our brains into. And if you thought Dobelli’s list was scary, wait until you see the impact of chronic stress! It actually attacks the brain by releasing excessive amounts of cortisol and restricting the uptake of serotonin, which can increase inflammation, lead to depression, shrink your hippocampus and impact your memory, make it difficult to sleep and impair your ability to think rationally.

To put a new twist on an old saying, “No news is good news.”

But let’s put aside for a moment the physical and mental toll that news takes on us. Even if none of that were true, our constant diet of bad news can also lead to something known as “alarm fatigue.”

Alarm fatigue is essentially our response to the proverbial boy who cried wolf. After several false alarms, we stop paying attention. And the one time we should be paying attention, we are caught with our guard down.

There is one other problem with our news diet: it oversimplifies complex problems into simple sound bites. Thomas Jefferson said, “An educated citizenry is a vital requisite for our survival as a free people.” But when the news abdicates its role as an informer to pursue profit as entertainment, it is no longer educating us. It is pandering to us, stuffing us with bite-sized opinion pieces that reinforce our beliefs – right or wrong. We are never challenged to examine our beliefs or explore the complexity of the wicked problems that confront us. Real journalism has been replaced by profitable punditry.

All this leaves us with a choice. Until the News industry cleans up its act (I’m not holding my breath), you’re likely far better off ignoring it. Or at least, ignoring the profit-driven platforms that are hungry for eyeballs. Stay informed by turning to books, long articles and true investigative journalism. That’s what I’m going to start doing.

Failing all that, just think about things. I understand it’s good for you.

Why I’m Worried About AI

Even in my world, which is nowhere near the epicenter of the technology universe, everyone is talking about AI. And depending on who’s talking, it’s either going to be the biggest boon to humanity, or it’s going to wipe us out completely. Middle ground seems to be hard to find.

I recently attended a debate about it at the local university. Two debaters were arguing for AI, and two were arguing against. I went into the debate somewhat worried. When I walked out at the end of the evening, my worry was bubbling just under the panic level.

The “For” Team had a computer science professor, Kevin Leyton-Brown, and a philosophy professor, Madeleine Ransom. Their arguments seemed to rely mainly on AI creating more leisure time for us by freeing us from the icky jobs we’d rather not do. Leyton-Brown did make a passing reference to AI helping us solve the many, many wicked problems we face, but he never got into specifics.

“Relax!” seemed to be the message. “This will be great! Trust us!”

The “Against” Team included a professor of Creative and Critical Studies, Bryce Traister. As far as I could see, he seemed to be mainly worried about AI replacing Shakespeare. He did seem quite enamored with the cleverness of his own quips.

It was the other “Against” debater, Wendy Wong, a professor of Political Science, who was the only one to talk about something concrete I could wrap my head around. She has a book on data and human rights coming out this fall, and many of her concerns focused on that area.

Interestingly, the AI debaters all mentioned Social Media in their arguments. And on this point, they were united. All the debaters agreed that the impact of Social Media has been horrible. But the boosters were quick to say that AI is nothing like Social Media.

Except that it is. Maybe not in terms of the technology that lies beneath it, but in terms of the unintended consequences it could unleash, absolutely! Like Social Media, what will get us with AI are the things we don’t know we don’t know.

I remember when social media first appeared on the scene. Like AI, there were plenty of evangelists lining up saying that technology would connect us in ways we couldn’t have imagined. We were redefining community, removing the physical constraints that had previously limited connections.

If there was a difference between social media and AI, it was that I don’t remember the same doomsayers at the advent of social media. Everyone seemed to be saying “This will be great! Trust us!”

Today, of course, we know better. No one was warning us that social media would divide us in ways we never imagined, driving a wedge down the ideological middle of our society. There were no hints that social media could (and still might) short circuit democracy.

Maybe that’s why we’re a little warier when it comes to AI. We’ve already been fooled once.

I find that AI boosters share a similar mindset – they tend to come from the S.T.E.M. (science, technology, engineering and math) school of thought. As I’ve said before, these types of thinkers tend to mistake complex problems for complicated ones. They think everything is solvable if you just have a powerful enough tool and apply enough brain power. For them, AI is the Holy Grail – a powerful tool that potentially applies unlimited brain power.

But the dangers of AI are hidden in the roots of complexity, not complication, and that requires a different way of thinking. If we’re going to get some glimpse of what’s coming our way, I am more inclined to trust the instincts of those that think in terms of the humanities. A thinker, for example, such as Yuval Noah Harari, author of Sapiens.

Harari recently wrote an essay in the Economist that may be the single most insightful thing I’ve read about the dangers of AI: “AI has gained some remarkable abilities to manipulate and generate language, whether with words, sounds or images. AI has thereby hacked the operating system of our civilisation.”

In my previous experiments with ChatGPT, it was this fear that was haunting me. Human brains operate on narratives. We are hard-wired to believe them. By using language, AI has a back door into our brains that bypasses all our protective firewalls.

My other great fear is that the development of AI is being driven by for-profit corporations, many of which rely on advertising as their main source of revenue. If ever there was a case of putting the fox in charge of the henhouse, this is it!

When it comes to AI, it’s not my job I’m afraid of losing. It’s my ability to sniff out AI-generated bullshit. That’s what’s keeping me up at night.

The Dangerous Bits about ChatGPT

Last week, I shared how ChatGPT got a few things wrong when I asked it “who Gord Hotchkiss was.” I did this with my tongue at least partially implanted in cheek – but the response did show me a real potential danger here, coming from how we will interact with ChatGPT.

When things go wrong, we love to assign blame. And if ChatGPT gets things wrong, we will be quick to point the finger at it. But let’s remember, ChatGPT is a tool, and the fault very seldom lies with the tool. The fault usually lies with the person using the tool.

First of all, let’s look at why ChatGPT put together a bio of me that was somewhat less than accurate (although very flattering to yours truly).

When AI Hallucinates

I have found a few articles that call ChatGPT out for lying. But lying is an intentional act, and – as far as I know – ChatGPT has no intention of deliberately leading us astray. Based on how ChatGPT pulls together information and synthesizes it into a natural language response, it actually thought that “Gord Hotchkiss” did the things it told me I had done.

You could more accurately say ChatGPT is hallucinating – giving a false picture based on the information it retrieves and then tries to connect into a narrative. It’s a flaw that will undoubtedly get better with time.

The problem comes with how ChatGPT handles its dataset and determines relevance between items in that dataset. In this thorough examination by Machine Learning expert Devansh Devansh, ChatGPT is compared to predictive autocomplete on your phone. Sometimes, through a glitch in the AI, it can take a weird direction.

When this happens on your phone, it’s word by word, and you can easily spot where things are going off the rails. With ChatGPT, an initial error that might be small at first continues to propagate until the AI has spun complete bullshit and packaged it as truth. This is how it fabricated the Think Tank of Human Values in Business, a completely fictional organization, and inserted it into my CV in a very convincing way.
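
To make the autocomplete comparison concrete, here’s a toy sketch of greedy next-word prediction. To be clear: this is my own illustration, not Devansh’s, and it bears no resemblance to how ChatGPT is actually built. But it does show how a single unlikely pick early on sends every later word down a different, internally consistent but entirely fabricated path.

```python
# A toy bigram "autocomplete": each word maps to ranked guesses for the next word.
# Purely illustrative -- not a model of any real language model's internals.
NEXT_WORD = {
    "gord":      ["hotchkiss", "founded"],
    "hotchkiss": ["writes"],
    "writes":    ["about"],
    "about":     ["marketing"],
    "founded":   ["the"],
    "the":       ["think"],
    "think":     ["tank"],
    "tank":      ["of"],
    "of":        ["human"],
    "human":     ["values"],
    "values":    ["in"],
    "in":        ["business"],
}

def generate(seed: str, length: int = 10, first_rank: int = 0) -> str:
    """Greedily extend `seed`; `first_rank` forces a lower-ranked guess on the first step."""
    words = [seed]
    for step in range(length):
        choices = NEXT_WORD.get(words[-1])
        if not choices:
            break
        rank = first_rank if step == 0 else 0  # the "error" happens once, right at the start
        words.append(choices[min(rank, len(choices) - 1)])
    return " ".join(words)

print(generate("gord"))                 # gord hotchkiss writes about marketing
print(generate("gord", first_rank=1))   # gord founded the think tank of human values in business
```

One wrong word in, and every word after it is perfectly plausible given the word before it – which is exactly what makes the fabrication so convincing.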

There are many, many others who know much more about AI and Natural Language Processing than I do, so I’m going to recognize my limits and leave it there. Let’s just say that ChatGPT is prone to sharing its AI hallucinations in a very convincing way.

Users of ChatGPT Won’t Admit Its Limitations

I know and you know that marketers are salivating over the possibility of AI producing content at scale for automated marketing campaigns. There is a frenzy of positively giddy accounts about how ChatGPT will “revolutionize Content Creation and Analysis” – including this admittedly tongue-in-cheek one co-authored by MediaPost Editor in Chief Joe Mandese and – of course – ChatGPT.

So what happens when ChatGPT starts to hallucinate in the middle of a massive social media campaign that is totally on autopilot? Who will be the ghost in the machine that will say “Whoa there, let’s just take a sec to make sure we’re not spinning out fictitious and potentially dangerous content?”

No one. Marketers are only human, and humans will always look for the path of least resistance. We work to eliminate friction, not add it. If we can automate marketing, we will. And we will shift the onus of verifying information to the consumer of that information.

Don’t tell me we won’t, because we have in the past and we will in the future.

We Believe What We’re Told

We might like to believe we’re Cartesian, but when it comes to consuming information, we’re actually Spinozan.

Let me explain. French philosopher René Descartes and Dutch philosopher Baruch Spinoza had two different views of how we determine if something is true.

Descartes believed that understanding and believing were two different processes. According to Descartes, when we get new information, we first analyze it and then decide if we believe it or not. This is the rational assessment that publishers and marketers always insist that we humans do and it’s their fallback position when they’re accused of spreading misinformation.

But Baruch Spinoza believed that understanding and belief happened at the same time. We start from a default position of believing information to be true without really analyzing it.

In 1993, Harvard Psychology Professor Daniel Gilbert decided to put the debate to the test (Gilbert, Tafarodi and Malone). He split a group of volunteers in half and gave both halves a text description detailing a real robbery. In the text there were true statements, in green, and false statements, in red. Some of the false statements made the crime appear to be more violent.

After reading the text, the study participants were supposed to decide on a fair sentence. But one of the groups got interrupted with distractions. The other group completed the exercise with no distractions. Gilbert and his researchers believed the distracted group would behave in a more typical way.

The distracted group gave out substantially harsher sentences than the other group. Because they were distracted, they forgot that green sentences were true and red ones were false. They believed everything they read (in fact, Gilbert’s paper was called “You Can’t Not Believe Everything You Read”).

Gilbert’s study showed that humans tend to believe first, and that we actually have to “unbelieve” if something is eventually proven to us to be false. One study even found the place in our brain where this happens – the right inferior prefrontal cortex. This suggests that “unbelieving” causes the brain to work harder than believing, which happens by default.

This brings up a three-pronged dilemma when we consider ChatGPT: it will tend to hallucinate (at least for now), users of ChatGPT will disregard that flaw when there are significant benefits to doing so, and consumers of ChatGPT generated content will believe those hallucinations without rational consideration.

When Gilbert wrote his paper, he was still three decades away from this dilemma, but he wrapped up with a prescient question:

“The Spinozan hypothesis suggests that we are not by nature, but we can be by artifice, skeptical consumers of information. If we allow this conceptualization of belief to replace our Cartesian folk psychology, then how shall we use it to structure our own society? Shall we pander to our initial gullibility and accept the social costs of prior restraint, realizing that some good ideas will inevitably be suppressed by the arbiters of right thinking? Or shall we deregulate the marketplace of thought and accept the costs that may accrue when people are allowed to encounter bad ideas? The answer is not an easy one, but history suggests that unless we make this decision ourselves, someone will gladly make it for us. “

Daniel Gilbert

What Gilbert couldn’t know at the time was that “someone” might actually be a “something.”

(Image:  Etienne Girardet on Unsplash)

I Wish I Were Half the Person ChatGPT Thinks I Am

Yes, I’ve finally crawled aboard the journalist bandwagon about ChatGPT. The reason I’ve waited so long is that I don’t really have any valid technical or professional perspective to add to the conversation. All I have is my own personal opinion.

So here it is, for what it’s worth.

For my first test, I wanted to ask ChatGPT about something where there would be a fair amount of information online but also something where I had the advantage of knowing the inside scoop, allowing me to judge the accuracy of its response. So I asked about the Golden Triangle – the outcome of eye tracking studies we did almost 20 years ago (I just had a minor internal meltdown when I did the mental math and realized that was two decades ago).

I have to admit, ChatGPT nailed it. It did an accurate and concise summary. I’m not sure I could have done better myself.

Google’s Golden Triangle, also known as the “Golden Triangle of Search,” refers to the top section of Google’s search engine results page (SERP) where the most relevant and highest-ranked search results are displayed. It is called the “golden triangle” because eye-tracking studies have shown that users tend to focus their attention on the top three results in the search results page, forming a triangular shape.

The concept of the Golden Triangle is important for search engine optimization (SEO) because it emphasizes the importance of ranking high in search results. Websites that appear in the Golden Triangle are more likely to receive clicks and traffic than those that appear lower on the page.

However, it’s worth noting that the layout of Google’s search engine results page has changed over the years, and the Golden Triangle may not be as relevant as it once was. Google now includes various types of results, such as featured snippets and knowledge panels, that may appear above or alongside the traditional organic search results. As a result, SEO strategies have evolved to include optimizing for these new types of results as well.

When I asked who discovered The Golden Triangle, ChatGPT accurately said it was a joint effort.

The concept of the Golden Triangle in relation to Google’s search engine results page (SERP) was not discovered by a single person, but rather through a combination of eye-tracking studies and data analysis conducted by various researchers and digital marketing professionals.

One of the earliest studies that examined how users interact with Google’s search results page was conducted by Enquiro Research in 2005. This study, titled “The Golden Triangle,” found that users tended to focus their attention on the top-left portion of the search results page, where the first few results were displayed.

Since then, other studies and analyses have confirmed the importance of the top section of the search results page, and the term “Golden Triangle” has become a widely used term in the SEO industry to describe this phenomenon.

For the record, the main players in identifying the pattern were Kevin Lee at DidIt; myself and my agency, Enquiro; EyeTools, the San Francisco eye-tracking consultants who conducted the first study; and – in an independent study – Jakob Nielsen at the Nielsen Norman Group. It was Kevin and I who gave it the name The Golden Triangle, mere moments before we had to present the findings to a group at a search conference in San Jose.

So let’s try something a little less black and white. Let’s see how ChatGPT does when it has to freestyle a bit. I asked who I was.

The first paragraph was pretty accurate:

Gord Hotchkiss is a Canadian internet marketing and search engine optimization (SEO) expert. He is the CEO of a digital marketing agency called “Enquiro Search Solutions” which specializes in providing marketing and SEO services to businesses. Gord Hotchkiss is also a well-known speaker and author on the topics of online marketing and search engine optimization.

The second paragraph started pumping up my ego a little, throwing in some accomplishments I either never did or else can’t remember doing:

Gord Hotchkiss has been involved in the online marketing industry for over 20 years and has worked with a wide range of clients, including Fortune 500 companies. He has written numerous articles on the topics of SEO, online marketing, and search engine technology for publications such as Search Engine Land, ClickZ, and MarketingProfs.

But in the third paragraph, ChatGPT definitely had me confused with someone else:

In addition to his work in the online marketing industry, Gord Hotchkiss is also a co-founder of the “Think Tank of Human Values in Business”, a non-profit organization that focuses on the intersection of business and ethics. He is known for his thought leadership in the industry and for advocating for ethical and sustainable business practices.

Okay, where the hell did that come from? It sounds extraordinarily cool and it’s something I would love to do, but it’s definitely not on my CV. As far as I’m aware, there is no such thing as the “Think Tank of Human Values in Business.” I wish there were, but there’s not.

I tried again. This time, ChatGPT seemed a little less sure of itself. The first sentence was somewhat correct, but after that ChatGPT seemed to manufacture my bio out of nothing but moonbeams and fairy dust. It said I was:

the CEO of a leading digital marketing agency, named “Digital Always Media Inc.” Gord is also a co-founder and chair of the “The Agents of Change” conference, an annual conference focused on helping businesses adapt to the ever-changing digital landscape.

Gord has written numerous articles and books on digital marketing and has been featured in several major media outlets, including The New York Times, The Wall Street Journal, and CNN.

I don’t know who this particular Gord Hotchkiss is, but I’d like to meet him. My AI doppelgänger is leaving me in the dust. I’d better get my shit together.

Or maybe ChatGPT can actually see into the future, showing me the person I just haven’t become yet.

What worries me is how easily ChatGPT constructed complete fabrications of who I was that sounded perfectly plausible. If I didn’t know myself better, I would have simply accepted it as fact. And that – unfortunately – is what we tend to do. We don’t analyze new information and determine if it’s right or wrong. Our default is to believe it’s true until proven otherwise.

It’s this human tendency that flags the real danger with ChatGPT. And, for that reason, I have some more digging to do.

Maybe this other Gord Hotchkiss guy can give me a hand. He sounds wicked smart.

(Image by Brian Penny — Pixabay license)

It Took a Decade, but Google Glass is Finally Broken

Did you hear that Google finally pulled the plug on Google Glass?

Probably not. The announcement definitely flew under the radar. It came with much less fanfare than the original roll out in 2013. The technology, which has been quietly on life support as an enterprise tool aimed at select industries, finally had its plug pulled with this simple statement on its support page:

Thank you for over a decade of innovation and partnership. As of March 15, 2023, we will no longer sell Glass Enterprise Edition. We will continue supporting Glass Enterprise Edition until September 15, 2023.

Talk about an ignoble demise. Google is offering a mere six months of support for those stubbornly hanging on to their Glass. Glass has been thrown into the ever-growing Google Graveyard, along with Google Health, Google+, Google Buzz, Google Wave, Knol – well, you get the idea.

It’s been 10 years, almost to the day, that Google invited 8000 people to become “Glass Explorers” (others had a different name – “Glassholes”) and plunge into the world of augmented reality.

I was not a believer – for a few reasons I talked about way back then. That led me to say, “Google Glass isn’t an adoptable product as it sits.” It took 10 years, but I can finally say, “I told you so.”

I did say that wearable technology, in other forms, would be a game changer. I just didn’t think that Google Glass was the candidate to do that. To be honest, I hadn’t really thought much more about it until I saw the muted news that this particular Glass was a lot more than half empty. I think there are some takeaways about the fading dividing line between technology and humans that we should keep in mind.

First of all, I think we’ve learned a little more about how our brains work with “always on” technologies like Google Glass. The short answer is, they don’t – at least not very well. And this is doubly ironic because, according to an interview with Google Glass product director Steve Lee on The Verge back in 2013, that was the whole point:

“We all know that people love to be connected. Families message each other all the time, sports fanatics are checking live scores for their favorite teams. If you’re a frequent traveler you have to stay up to date on flight status or if your gate changes. Technology allows us to connect in that way. A big problem right now are the distractions that technology causes.”

The theory was that it was much less distracting to have information right in the line of sight, rather than having to go to a connected screen that might be in your pocket.

Lee went on. “We wondered, what if we brought technology closer to your senses? Would that allow you to more quickly get information and connect with other people but do so in a way — with a design — that gets out of your way when you’re not interacting with technology? That’s sort of what led us to Glass.” 

The problem here was one of incompatible operating systems – the one that drove Google Glass and the one we have baked into our brains. It turned out that maybe the technology was a little too close to our senses. A 2016 study (Lewis and Neider) found that trying to split attention between two different types of tasks – one scanning information on a heads-up display and one trying to focus on the task at hand – ended up with the brain not being able to focus effectively on either. The researchers ended with this cautionary conclusion: “Our data strongly suggest that caution should be exercised when deploying HUD-based informational displays in circumstances where the primary user task is visual in nature. Just because we can, does not mean we should.”

For anyone who spends even a little time wondering how the brain works, this should not come as a surprise. There is an exhaustive list of research showing that the brain is not that great at multi-tasking. Putting a second cognitive task for the brain in our line of sight simply means the distraction is all that much harder to ignore.

Maybe there’s a lesson here for Google. I think sometimes they get a little starry-eyed about their own technological capabilities and forget to factor in the human element. I remember talking to a roomful of Google engineers more than a decade ago about search behaviors. I asked them if any of them had heard about Pirolli and Card’s pioneering work on their Information Foraging theory. Not one hand went up. I was gobsmacked. That should be essential reading for anyone working on a search interface. Yet, on that day, the crickets were chirping loudly at Mountain View.

If the Glass team had done their human homework, they would have found that the brain needs to focus on one task at a time. If you’re looking to augment reality with additional information, that information has to be synthesized into a single cohesive task for the brain. This means that for augmented reality to be successful, the use case has to be carefully studied to make sure the brain isn’t overloaded.

But I suspect there was another sticking point that prevented Google Glass from being widely adopted. It challenged the very nature of our relationship with technology. We like to believe we control technology, rather than the other way around. We have defined the online world as somewhere we “go” to through our connected devices. We are in control of when and where we do this. Pulling a device out and initiating an action keeps this metaphorical divide in place.

But Google Glass blurred this line in a way that made us uncomfortable. Again, a decade ago, I talked about the inevitable tipping point that will come with the merging of our physical and virtual worlds. Back then, I said, “as our technology becomes more intimate, whether it’s Google Glass, wearable devices or implanted chips, being ‘online’ will cease to be about ‘going’ and will become more about ‘being.’  As our interface with the virtual world becomes less deliberate, the paradigm becomes less about navigating a space that’s under our control and more about being an activated node in a vast network.”

I’m just speculating, but maybe Google Glass was just a step too far in this direction – for now, anyway.

(Feature image: Tim.Reckmann, CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0, via Wikimedia Commons)

The Pursuit of Happiness

Last week, I talked about physical places where you can find happiness – places like Fremont, California, the happiest city in the US, or Finland, the happiest country in the world.

But, of course, happiness isn’t a place. It’s a state of mind. You don’t find happiness. You experience happiness. And the nature of that experience is a tough thing to nail down.

That could be why the World Happiness Report was called “complete crap” by opinion columnist Kyle Smith back in 2017:

“These surveys depend on subjective self-reporting, not to mention eliding cultural differences. In Japan there is a cultural bias against boasting of one’s good fortune, and in East Asia the most common response, by far, is to report one’s happiness as average. In Scandinavia, meanwhile, there is immense societal pressure to tell everyone how happy you are, right up to the moment when you’re sticking your head in the oven.”

Kyle Smith, 2017

And that’s the problem with happiness. It’s kind of like quantum mechanics – the minute you try to measure it, it changes.

Do you ever remember your grandparents trying to measure their happiness? It wasn’t a thing they thought about. Sometimes they were happy, sometimes they weren’t. But they didn’t dwell on it. They had other, more pressing, matters to think about. And if you asked them to self-report their state of happiness, they’d look at you like you had just given birth to a three-horned billy goat.

Maybe we think too much about happiness. Maybe we’re setting our expectations too high. A 2011 study (Mauss, Tamir, Anderson & Savino) found that the pursuit of happiness may lead to the opposite outcome, never being happy. “People who highly value happiness set happiness standards that are difficult to obtain, leading them to feel disappointed about how they feel, paradoxically decreasing their happiness the more they want it.”

This is a real problem, especially in today’s media environment. Never in our lives have we been more obsessed with the pursuit of happiness. The problem comes with how we define that happiness. If you look at how media portrays happiness, it’s a pretty self-centred concept. It’s really all about us: what we have, where we are, how we’re feeling, what we’re doing. And all that is measured against what should make us happier.

That’s where the problem of measurement raises its prickly little head. In 1971, social scientists Philip Brickman and Donald T. Campbell came up with something called the “happiness set point.” They wanted to see if major life events – both negative and positive – actually changed how happy people were. The initial study and the follow-ups that further explored the question found that after an initial shift in happiness following major events such as lottery wins, big promotions or life-altering accidents, people gradually returned to a happiness baseline.

But more recent academic work has found that it’s not quite so simple. First of all, there’s no such thing as a universal happiness “set point.” We all have different baselines of how happy we are. Also, some of us are more apt to respond, either positively or negatively, to major life events.

There are life events that can remove the foundations of happiness – for example, losing your job, causing a significant downturn in your economic status. As I mentioned before, money may not buy happiness, but economic stability is correlated with happiness.

What can make a difference in happiness is what we spend time doing. And in this case, life events can set up the foundations of changes that can lead to either more happiness or less. Generally, anything that leads to more interaction with others makes us happier. Anything that leads to social withdrawal tends to make us less happy.

So maybe happiness isn’t so much about how we feel, but rather a product of what we do.

Continuing on this theme, I found a couple of interesting data visualizations by statistician Nathan Yau. The most recent one examined the things that people did at work that made them happy.

If you’re in the legal profession, I have bad news. That ranked highest for stress and low for happiness and meaningfulness. On the other end of the spectrum, hairdressers and manicurists scored high for happiness and low on stress. Construction jobs also seemed to tick the right boxes when it comes to happiness on the job.

For me, the more interesting analysis was one Yau did back in 2018. He looked at a dataset that came from asking 10,000 people what had made them happy in the past 24 hours. Then he parsed the language of those responses to look for the patterns that emerged. The two biggest categories that led to happiness were “Achievement” and “Affection.”

From this, we start to see some common underpinnings for happiness: doing things for others, achieving the things that are important to us, spending time with our favorite people, bonding over shared experiences.
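
As a rough illustration of the kind of language parsing Yau describes – this is my own toy sketch with made-up responses, not his actual method – even crude keyword bucketing is enough to surface categories like “Achievement” and “Affection”:

```python
from collections import Counter

# Hypothetical responses standing in for the real dataset of 10,000 answers
# to "what made you happy in the past 24 hours?"
responses = [
    "I finally finished painting the fence",
    "My daughter called me out of the blue",
    "Got a promotion at work",
    "Dinner with old friends",
]

# Crude keyword buckets, loosely mirroring the two biggest categories Yau found.
CATEGORIES = {
    "achievement": {"finished", "promotion", "won", "completed", "built"},
    "affection":   {"daughter", "friends", "family", "hug", "wife", "husband"},
}

def categorize(text: str) -> str:
    """Assign a response to the first category whose keywords it mentions."""
    words = set(text.lower().split())
    for category, keywords in CATEGORIES.items():
        if words & keywords:
            return category
    return "other"

print(Counter(categorize(r) for r in responses))
# Counter({'achievement': 2, 'affection': 2})
```

Yau’s real analysis was far more nuanced, but the principle is the same: the words people reach for when describing a happy moment cluster around a handful of themes.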

So let’s get back to the “pursuit of happiness” – something so important to Americans that they enshrined it in the Declaration of Independence. But, according to Stanford historian Caroline Winterer in her 2017 TED talk, that definition of happiness was significantly different from what we think of today. In her words, that happiness meant, “Every citizen thinking of the larger good, thinking of society, and thinking about the structures of government that would create a society that was peaceful and that would allow as many people as possible to flourish.”

When I think of happiness, that makes more sense. It also matches the other research I’ve shared here. We seem happiest when we’re not focused on ourselves, but are instead thinking about others. That’s especially true now, when our happiness navel-gazing so often measures how we come up short against the unrealistic expectations set by social media.

Like too many things in our society, happiness has morphed from something good and noble into a selfish sense of entitlement.

(Image credit – Creative Commons License – https://www.flickr.com/photos/stevenanichols/2722210623)

Finding Your Happy Place

Where can you find happiness? According to a recent study from WalletHub, you’re statistically more likely to find it in Fremont or San Jose, California. If you’re in Madison, Wisconsin, you won’t be quite as happy, but you’ll still be ahead of 98.5% of the US. Fremont, San Jose and Madison are the three happiest cities in America.

If you live in Shreveport, Louisiana, Huntington, West Virginia or Detroit, Michigan, your life may well be a giant sucking hole of despair. Statistically, anyway. Those are the three least happy cities in the US.

Again, WalletHub’s words, not mine.

I know what you’re saying. You see these posts about happy places all the time in your feed. How much credence should you give them?

I’ll be honest. Normally, I scroll right past them. I don’t know what made me look at this one. Maybe it’s because I’ve recently been taking stock of my own level of happiness. Or maybe I was thinking, “What the hell? I have a few minutes. Let’s try to quantify this whole happiness thing.”

The time might be right. As we claw our way out of a global pandemic and the various other catastrophes that bump up against each other as they jostle for our attention in our news feed, we can’t be blamed for wanting a little more happiness in our lives. I’m pretty sure that’s at least one of the factors behind the great resignation in the wake of Covid.

Also, more of us are choosing to work virtually from home. Wouldn’t it make sense to situate that home in the place where you’re happiest? More and more of our jobs aren’t tied to a physical location. We can live anywhere we want. So why shouldn’t that place be Fremont, California? And I’m told Madison has great cheese curds.

So, today I’m going to help you find that happy place.

First, maybe the focus on cities is a little too narrow. Who says we’re happiest in a city? Recent research has found that yes, in poorer countries, odds are you’ll be happier in a city than in the country. When the whole country is struggling to get by, there’s just more of what you need to survive in a city. But as countries become wealthier, that gap disappears and actually reverses itself, giving a slight happiness edge to those living beyond the city limits. So, if you’re looking for happiness, you might want to consider “movin’ to the country (where you’re) gonna eat a lot of peaches” (obscure pop reference for those of you over 55).

Let’s broaden our focus a mite, out to the happiest states. Luckily, the good folks at WalletHub have you covered there too. According to them, the three happiest states are (in order) Hawaii, Maryland and Minnesota. If you live in West Virginia, you’d better start re-examining your life choices. It scored lowest.

But who says the US is the be all and end all of happiness? Certainly not the World Happiness Report, which has to be the definitive source on all things happy. According to it, the 5 happiest countries on earth are (again in order) – Finland, Denmark, Iceland, Switzerland and the Netherlands. The US is quite a bit down the list in the sixteenth slot.

Perhaps happiness is positively correlated with pickled herring and lingonberries.

Now, for reasons I’ll explore in a future post, I urge you to take this whole empirical approach to happiness with a grain of salt, but there must be something to all these happiness rankings. These countries traditionally top the various lists of the best places to live. One has to wonder why. Or, at least, this “one” wondered why.

So I put together a spreadsheet of the 20 happiest countries in the study and started looking for the common denominator of happiness. I looked at five different potential candidates (including some from the Global Sustainability Competitive Index): Gross Domestic Product per Capita, Social Capital, Natural Capital, Governance Performance and Liberal Democracy.
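
If you want to run the same back-of-the-envelope exercise, it’s only a few lines of pandas. Here’s a minimal sketch with placeholder numbers – the country values below are invented for illustration, not the figures I actually used:

```python
import pandas as pd

# Placeholder values for a few of the 20 countries. In the real exercise, the
# happiness scores came from the World Happiness Report and the candidate
# factors from sources like the Global Sustainability Competitive Index.
data = pd.DataFrame({
    "happiness":         [7.8, 7.6, 7.5, 7.4, 7.3],
    "gdp_per_capita":    [53000, 60000, 64000, 72000, 57000],
    "social_capital":    [82, 79, 77, 75, 74],
    "natural_capital":   [68, 71, 65, 62, 70],
    "governance":        [88, 90, 85, 83, 86],
    "liberal_democracy": [0.81, 0.79, 0.77, 0.85, 0.80],
}, index=["Finland", "Denmark", "Iceland", "Switzerland", "Netherlands"])

# Pearson correlation of each candidate factor with the happiness score,
# strongest first.
correlations = (
    data.corr()["happiness"]
        .drop("happiness")
        .sort_values(ascending=False)
)
print(correlations)
```

With real data in place of the placeholders, the ranking of those correlations is what the next few paragraphs walk through.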

First of all, money may not buy happiness, but it certainly doesn’t hurt. There was a pretty strong correlation between GDP per capita and the happiness score. It seems that, up to a point, we need enough money to be comfortable in order to be happy. But, as wealth accumulates, happiness begins to plateau. The world’s longest-running happiness study has repeatedly shown this. Marc Schulz, author of “The Good Life,” said “money can’t buy us happiness, but it’s a tool that can give us security and safety and a sense of control over lives.”

Another fairly strong correlation was with Natural Capital, which is defined as having adequate access to clean water and air, as well as proximity to forests, oceans and healthy biodiversity. This had a correlation just slightly lower than the one with GDP per capita.

Much as I would have liked it to be a little higher, given my own political leanings, there was a weaker correlation between liberal democracy and happiness. But, in the silver-lining category, there was a strong correlation between liberal democracy and governance performance. The world’s happiest places tend to have either a constitutional monarchy, a parliamentary system, or both, overseeing a social democracy. Take that for what it’s worth.

Surprisingly, the weakest correlation was between effective governance and happiness. That said, it was still a significant correlation, so it did play a part in creating the conditions required for happiness.

All of the above factors run the risk of us conflating correlation and causation. There are certain things that are table stakes for happiness. A reasonable degree of good governance, a safe environment and a healthy economy are three of these. We need them to be happy, but they don’t cause us to be happy.

The last factor, which had the strongest correlation by a significant margin, is different. Not surprisingly, social capital is a direct cause of happiness. If you want to be happy, live somewhere where people love and care for each other. Denmark, the second happiest place on earth, is the home of “hygge” – a general sense of coziness. As I’ve said before, the Danes have “created an environment that leads to bumping into each other.”

 It’s in this beneficial societal friction where you’re statistically more likely to find happiness, wherever you live.

(Image https://www.flickr.com/photos/marcygallery/3803517719 – Creative Commons License)