AI, Creativity and the Last Beatles Song

I have never been accused of being a Luddite. Typically, I’m on the other end of the adoption curve – one of the first to adopt a new technology. But when it comes to AI, I am stepping forward gingerly.

Now, my hesitancy notwithstanding, AI is here to stay. In my world, it is well past the tipping point from a thing that exists solely in the domain of tech to a topic of conversation for everyone, from butchers to bakers to candlestick makers. Everywhere I turn now I see those ubiquitous two letters – AI. That was especially true in the last week, with the turmoil around Sam Altman and the “is he fired/isn’t he” drama at OpenAI.

In 1991, Geoffrey Moore wrote the book Crossing the Chasm, looking at how technologies are adopted. He explained that adoption depends on the nature of the technology itself. If it’s a continuation of technology we understand, adoption follows a fairly straightforward bell curve through the general population.

But if it’s a disruptive technology – one that we’re not familiar with – then adoption plots itself out on an S-Curve. The tipping point in the middle of that curve where it switches from being skinny to being fat is what he called the “chasm.” Some technologies get stuck on the wrong side of the chasm, never to be adopted by the majority of the market.  Think Google Glass, for example.

There is often a pattern to the adoption of disruptive technologies (and AI definitely fits this description).  To begin with, we find a way to adapt it and use it for the things we’re already doing. But somewhere along the line, innovators grasp the full potential of the technology and apply it in completely new ways, pushing capabilities forward exponentially. And it’s in that push forward where all the societal disruption occurs. Suddenly, all the unintended consequences make themselves known.

This is exactly where we seem to be with AI. Most of us are using it to tweak the things we’ve always done. But the prescient amongst us are starting to look at what might be, and for many of us, we’re doing so with a furrowed brow. We’re worried, and, I suspect, with good reason.

As one example, I’ve been thinking about AI and creativity. As someone who has always dabbled in creative design, media production and writing, this has been top of mind for me. I have often tried to pry open the mystic box that is the creative process.

There are many – creative software developers foremost amongst them – who will tell you that AI will be a game changer when it comes to creating – well – just about anything.

Or, in the case of the last Beatles single to be released, recreating anything. Now and Then, the final Beatles song featuring the Fab Four, was made possible by an AI program created by Peter Jackson’s team for the documentary Get Back. It allowed Paul McCartney, Ringo Starr and their team of producers (headed by George Martin’s son Giles) to separate John Lennon’s vocals from the piano background on a demo tape from 1978.

One last Beatles song featuring John Lennon – that should be a good thing, right? I guess. But there’s a flip side to this.

Let’s take writing, for example. Ask anyone who has written something longer than a tweet or Instagram post. What you start out intending to write is never what you end up with. Somehow, the process of writing takes its own twists and turns, usually surprising even the writer. Even these posts, which average only 700 to 800 words, usually end up going in unexpected directions by the time I place the final period.

Creativity is an iterative process and there are stages in that process. It takes time for it all to play out. No matter how good my initial idea is, if I simply fed it into an AI black box and hit the “create” button, I don’t know if the outcome would be something I would be happy with.

“But,” you protest, “what about AI taking the drudgery out of the creative process? What if you use it to clean up a photo, or remove background noise from an audio recording (à la the Beatles single)? That should free up more time and more options for you to be creative, right?”

That promise is certainly what’s being pitched by AI merchants right now. And it makes sense. But it only makes sense at the skinny end of the adoption curve. That’s where we’re at right now, using AI as a new tool to do old jobs. If we think that’s where we’re going to stay, I’m pretty sure we’re being naïve.

I believe creativity needs some sweat. It benefits from a timeline that allows for thinking, and rethinking, over and over again. I don’t believe creativity comes from instant gratification, which is what AI gives us. It comes from iteration that creates the spaces needed for inspiration.

Now, I may be wrong. Perhaps AI’s ability to instantly produce hundreds of variations of an idea will prove the proponents right. It may unleash more creativity than ever. But I still believe we will lose an essential human element in the process that is critical to the act of creation.

Time will tell. And I suspect it won’t take very long.

(Image – The Beatles in WPAP – wendhahai)

Getting from A to Zen

We live in a Type A world. And sometimes, that’s to our detriment.

According to one definition, Type A is achievement oriented, competitive, fast-paced and impatient.

All of that pretty much sums up the environment we live in. But you know what’s hard to find in a Type A world? Your Zen.

I know what you’re thinking — “I didn’t peg Gord for a Zen-seeking kinda guy.” And you’re mostly right. I’m not much for meditation. I’ve tried it — it’s not for me. I’ll be honest. It feels a little too airy-fairy for my overly rational brain.

But I do love cutting the grass. I also love digging holes, retouching photos in Photoshop and cleaning pools. Those are some of the activities where I can find my Zen.

Best-selling author Peggy Orenstein found her Zen during COVID – shearing sheep. She shares her journey in her new book, “Unraveling: What I Learned About Life While Shearing Sheep, Dyeing Wool, and Making the World’s Ugliest Sweater.” Orenstein has a breezy, humorous, and self-deprecating style, but there are some deep thoughts here.

In reading the book, I learned it wasn’t the act of shearing where Peggy found her Zen. That’s because sheep shearing is really hard work. You can’t let your mind wander as you wrestle 200 to 300 pounds of Ovis aries while holding a buzzing, super-sharp set of shears and trying to give it a haircut.

As Orenstein said in a recent interview, “Imagine you were in a ballet with Nureyev and nobody told you the steps. That was what it felt like to reach shearing sheep, you know, for the first time.”

No. You might find a lot of things in that activity, but Zen isn’t likely to be one of them. Orenstein finds her Zen in a less terrifying place, cleaning poop out of the newly shorn wool. She did it the way it’s been done for centuries, in a process called carding. While she carded the wool, she would “FaceTime” her dad, who has dementia.

In the interview, she said, “You know, I could just slow down. These ancient arts are slow. They’re very slow and (I would) sit with him and just be next to him and have that time together and sing.”

When I heard her say that in the interview, it hit me. I said, “I have to read this book.” Because I got it. That slowing down, that inner connection, the very act of doing something that seems mindless but isn’t – because doing the act creates the space for your mind to think the thoughts it normally doesn’t have time to think. All that stuff is important.

To me, that’s my Zen.

Now, unless you’re a Mahayana Buddhist, Zen is probably nothing more than a buzzword that made its way westward into our zeitgeist sometime in the last century. I am certainly not a Buddhist, so I am not going to dare tell you the definitive meaning of Zen. I am just going to tell you what my version is.

For me, Zen is a few things:

First, I think these Zen acts have to contribute to the world in some small way. There has to be something at the end that gives you a sense of accomplishment – the feeling of a job well done.

Maybe that’s why meditation is not for me. There is not a tangible reward at the end. But you can look at a pile of newly shorn fleece or a lawn neatly delineated with the tire tracks of your lawnmower.

Second, the brain must be engaged in a Zen task, but not too much. It needs some space to wander. Repetition helps. As you do the task, your mind eventually shifts to auto-pilot mode. And that’s when I find Zen, as my mind is given the license to explore.

I think this is where step one is important – whatever you’re doing has to be useful enough that you don’t feel that you’re wasting time doing it.

Finally, it helps if your Zen tasks are done in a place where the Type A world doesn’t intrude. You need the space to push back interruption and let your mind wander freely.

I realize there are some of you who will immediately connect with what I’m saying, and others who won’t have a clue. That’s okay.

I think that’s the magic of Zen: it’s not for everyone. But for those of us who understand how important it is, we sometimes need a little reminder to go seek it. Because in this Type A world, it’s becoming harder to find.

No News is Good News

I’m trying not to pay too much attention to the news. This is partly because I’m exhausted by the news, and partly because of the sad state of journalism today.

This isn’t just a “me” thing. Almost everyone I talk to says they’re trying to find coping mechanisms to deal with the news. The News industry – and its audience – has gone from being an essential part of a working democracy to something that is actually bad for you. In an online essay from four years ago, Swiss author Rolf Dobelli equates news consumption to a bad diet:

“(translated from its original German) News is to the mind what sugar is to the body. News is appetizing, easily digestible and at the same time highly harmful. The media feeds us morsels of trivial stories, tidbits that by no means satisfy our hunger for knowledge. Unlike with books and long, well-researched articles, there is no saturation when consuming news. We can devour unlimited amounts of messages; they remain cheap sugar candies. As with sugar, the side effects only show up with a delay.”

Rolf Dobelli, 2019

This alarming state is due to the fact that the News (in the US) is supported by advertising, which means it has a ravenous appetite for eyeballs. Because of this, it is highly profitable to make news addictive.

This creates a state, as Dobelli points out, where even though the news is highly inflammatory, like a constantly jangling alarm bell, almost all the news we consume is irrelevant to our daily lives. While the news we watch pushes all our hot buttons, it doesn’t serve a useful purpose. In fact, it does the exact opposite: it leads to chronic mental and physical ill-being and may cause us to start ignoring the warning signs we should be paying attention to.

A study last year (McLaughlin, Gotlieb and Mills) found ties between problematic news consumption and mental ill-being. The study found that 16.5% of 1,100 people polled in an online survey showed signs of “severely problematic” news consumption, which led them to focus less on school, work and family, and contributed to an inability to sleep.

Dobelli’s essay goes even further, pointing a finger at excessive news consumption as the cause of a list of issues including cognitive errors, inhibiting deeper thinking, wasting time, killing creativity, making us more passive and even wiring our brains for addiction in a manner similar to drugs.

All these negative side effects come from chronic stress – a constant and pervasive alarmed state that excessive news consumption puts our brains into. And if you thought Dobelli’s list was scary, wait until you see the impact of chronic stress! It actually attacks the brain by releasing excessive amounts of cortisol and restricting the uptake of serotonin, which can increase inflammation, lead to depression, shrink your hippocampus and impact your memory, make it difficult to sleep and impair your ability to think rationally.

To put a new twist on an old saying, “No news is good news.”

But let’s put aside for a moment the physical and mental toll that news takes on us. Even if none of that were true, our constant diet of bad news can also lead to something known as “alarm fatigue.”

Alarm fatigue is essentially our response to the proverbial boy who cried wolf. After several false alarms, we stop paying attention. And on that one time when we should be paying attention, we are caught with our guard down.

There is one other problem with our news diet: it oversimplifies complex problems into simple sound bites. Thomas Jefferson said, “An educated citizenry is a vital requisite for our survival as a free people.” But when the news abdicates its role as an informer to pursue profit as entertainment, it is no longer educating us. It is pandering to us by feeding us bite-sized opinion pieces that reinforce our beliefs – right or wrong. We are never challenged to examine our beliefs or explore the complexity of the wicked problems that confront us. Real journalism has been replaced by profitable punditry.

All this leaves us with a choice. Until the News industry cleans up its act (I’m not holding my breath), you’re likely far better off ignoring it. Or at least, ignore the profit-driven platforms that are hungry for eyeballs. Stay informed by turning to books, long articles and true investigative journalism. That’s what I’m going to start doing.

Failing all that, just think about things. I understand it’s good for you.

Why I’m Worried About AI

Even in my world, which is nowhere near the epicenter of the technology universe, everyone is talking about AI. And depending on who’s talking, it’s either going to be the biggest boon to humanity, or it’s going to wipe us out completely. Middle ground seems to be hard to find.

I recently attended a debate at the local university about it. Two were arguing for AI, and two were arguing against. I went into the debate somewhat worried. When I walked out at the end of the evening, my worry was bubbling just under the panic level.

The “For” Team had a computer science professor – Kevin Leyton-Brown, and a philosophy professor – Madeleine Ransom. Their arguments seemed to rely mainly on creating more leisure time for us by freeing us from the icky jobs we’d rather not do. Leyton-Brown did make a passing reference to AI helping us to solve the many, many wicked problems we face, but he never got into specifics.

“Relax!” seemed to be the message. “This will be great! Trust us!”

The “Against” Team included a professor in Creative and Critical Studies – Bryce Traister. As far as I could see, he seemed to be mainly worried about AI replacing Shakespeare. He did seem quite enamored with the cleverness of his own quips.

It was the other “Against” debater who was the only one to actually talk about something concrete I could wrap my head around. Wendy Wong is a professor of Political Science. She has a book on data and human rights coming out this fall. Many of her concerns focused on this area.

Interestingly, the AI debaters all mentioned Social Media in their arguments. And on this point, they were united. All the debaters agreed that the impact of Social Media has been horrible. But the boosters were quick to say that AI is nothing like Social Media.

Except that it is. Maybe not in terms of the technology that lies beneath it, but in terms of the unintended consequences it could unleash, absolutely! Like Social Media, what will get us with AI are the things we don’t know we don’t know.

I remember when social media first appeared on the scene. Like AI, there were plenty of evangelists lining up saying that technology would connect us in ways we couldn’t have imagined. We were redefining community, removing the physical constraints that had previously limited connections.

If there was a difference between social media and AI, it was that I don’t remember the same doomsayers at the advent of social media. Everyone seemed to be saying “This will be great! Trust us!”

Today, of course, we know better. No one was warning us that social media would divide us in ways we never imagined, driving a wedge down the ideological middle of our society. There were no hints that social media could (and still might) short circuit democracy.

Maybe that’s why we’re a little warier when it comes to AI. We’ve already been fooled once.

I find that AI Boosters share a similar mindset – they tend to be from the S.T.E.M. (Science, Technology, Engineering and Math) School of Thought. As I’ve said before, these types of thinkers tend to mistake complex problems for complicated ones. They think everything is solvable, if you just have a powerful enough tool and apply enough brain power. For them, AI is the Holy Grail – a powerful tool that potentially applies unlimited brain power.

But the dangers of AI are hidden in the roots of complexity, not complication, and that requires a different way of thinking. If we’re going to get some glimpse of what’s coming our way, I am more inclined to trust the instincts of those who think in terms of the humanities. A thinker, for example, such as Yuval Noah Harari, author of Sapiens.

Harari recently wrote an essay in the Economist that may be the single most insightful thing I’ve read about the dangers of AI: “AI has gained some remarkable abilities to manipulate and generate language, whether with words, sounds or images. AI has thereby hacked the operating system of our civilisation.”

In my previous experiments with ChatGPT, it was this fear that was haunting me. Human brains operate on narratives. We are hard-wired to believe them. By using language, AI has a back door into our brains that bypasses all our protective firewalls.

My other great fear is that the development of AI is being driven by for-profit corporations, many of which rely on advertising as their main source of revenue. If ever there was a case of putting the fox in charge of the henhouse, this is it!

When it comes to AI, it’s not my job I’m afraid of losing. It’s my ability to sniff out AI-generated bullshit. That’s what’s keeping me up at night.

Search and ChatGPT – You Still Can’t Get There From Here

I’m wrapping up my ChatGPTrilogy with a shout out to an old friend that will be familiar to many Mediaposters – Aaron Goldman. 13 years ago Aaron wrote a book called Everything I Know About Marketing I Learned from Google.  Just a few weeks ago, Aaron shared a post entitled “In a World of AI, is Everything I Know about Marketing (still) Learned from Google”. In it, he looked at the last chapter of the book, which he called Future-Proofing. Part of that chapter was based on a conversation Aaron and I had back in 2010 about what search might look like in the future.

Did we get it right? Well, remarkably, we got a lot more right than we got wrong, especially with the advent of Natural Language tools such as ChatGPT and virtual assistants like Siri.

We talked a lot about something I called “app-sistants”. I explained, “the idea of search as a destination is an idea whose days are numbered. The important thing won’t be search. It will be the platform and the apps that run on it. The next big thing will be the ability to seamlessly find just the right app for your intent and utilize it immediately.” In this context, “the information itself will become less and less important and the app that allows utilization of the information will become more and more important.”

To be honest, this evolution in search has taken a lot longer than I thought back then: “Intent will be more fully supported from end to end. Right now, we have to keep our master ‘intent’ plan in place as we handle the individual tasks on the way to that intent.”

Searching for complex answers as it currently sits requires a lot of heavy lifting. In that discussion, I used the example of planning a trip.  “Imagine if there were an app that could keep my master intent in mind for the entire process. It would know what my end goal was, would be tailored to understand my personal preferences and would use search to go out and gather the required information. When we look at alignment of intent, [a shift from search to apps is] a really intriguing concept for marketers to consider.”

So, the big question is, do we have such a tool? Is it ChatGPT? I decided to give it a try and see. After feeding ChatGPT a couple of carefully crafted prompts about a trip I’d like to take to Eastern Europe someday, I decided the answer is no. We’re not quite there yet. But we’re closer.

After a couple of iterations, ChatGPT did a credible job of assembling a potential itinerary of a trip to Croatia and Slovenia. It even made me aware of some options I hadn’t run across in my previous research. But it left me hanging well short of the “app-sistant” I was dreaming of in 2010. Essentially, I got a suggestion, but all the detail work to turn it into an actual trip still required me to do hundreds of searches in various places.

The problem with ChatGPT is that it gets stuck between the millions of functionality siloes – or “walled gardens” – that make up the Internet. Those walled gardens exist because they represent opportunities for monetization. In order for an app-sistant to be able to multitask and make our lives easier, we need a virtual “commonage” that gets rid of some of these walls. And that’s probably the biggest reason we haven’t seen a truly useful iteration of the functionality I predicted more than a decade ago.

This conflict between capitalism and the concept of a commonage goes back at least to the Magna Carta. As England’s economy transitioned from feudalism to capitalism, enclosure saw the building of fences and the wiping out of lands held as a commonage. The actual landscape became a collection of walled gardens that enforced the property rights of each parcel and protected the future production value of those parcels.

This history, which played out over hundreds of years, was repeated and compressed into a few decades online. We went from the naïve idealism of a “free for all” internet in the early days to the balkanized patchwork of monetization siloes that currently make up the Web.

Right now, search engines are the closest thing we have to a commonage on the virtual landscape. Search engines like Google can pull data from within many gardens, but if we actually try to use the data, we won’t get far before we run into a wall.

To go back to the idea of trip planning, I might be able to see what it costs to fly to Rome or what the cost of accommodations in Venice is on a search engine, but I can’t book a flight or reserve a room. To do that, I have to visit an online booking site. If I’m on a search engine, I can manually navigate this transition fairly easily. But it would stop something like ChatGPT in its tracks.

When I talked to Aaron 13 years ago, I envisioned search becoming a platform that lived underneath apps which could provide more functionality to the user. But I also was skeptical about Google’s willingness to do this, as I stated in a later post here on Mediapost.  In that post, I thought that this might be an easier transition for Microsoft.

Whether it was prescience or just dumb luck, it is indeed Microsoft taking the first steps towards integrating search with ChatGPT, through its recent integration with Bing. Expedia (who also has Microsoft DNA in its genome) has also taken a shot at integrating ChatGPT in a natural language chat interface.

This flips my original forecast on its head. Rather than the data becoming common ground, it’s the chat interface that’s popping up everywhere. Rather than tearing down the walls that divide the online landscape, ChatGPT is being tacked up as window decoration on those walls.

I did try planning that same trip on both Bing and Expedia. Bing – alas – also left me well short of my imagined destination. Expedia – being a monetization site to begin with – got me a little closer, but it still didn’t seem that I could get to where I wanted to go.

I’m sorry to say search didn’t come nearly as far as I hoped it would 13 years ago. Even with ChatGPT thumbtacked onto the interface, we’re just not there yet.

(Feature Image: OpenAI Art generated from the prompt: “A Van Gogh painting of a chatbot on a visit to Croatia”)

The Dangerous Bits about ChatGPT

Last week, I shared how ChatGPT got a few things wrong when I asked it “who Gord Hotchkiss was.” I did this with my tongue at least partially implanted in cheek – but the response did show me a real potential danger here, coming from how we will interact with ChatGPT.

When things go wrong, we love to assign blame. And if ChatGPT gets things wrong, we will be quick to point the finger at it. But let’s remember, ChatGPT is a tool, and the fault very seldom lies with the tool. The fault usually lies with the person using the tool.

First of all, let’s look at why ChatGPT put together a bio for me that was somewhat less than accurate (although it was very flattering to yours truly).

When AI Hallucinates

I have found a few articles that call ChatGPT out for lying. But lying is an intentional act, and – as far as I know – ChatGPT has no intention of deliberately leading us astray. Based on how ChatGPT pulls together information and synthesizes it into a natural language response, it actually thought that “Gord Hotchkiss” did the things it told me I had done.

You could more accurately say ChatGPT is hallucinating – giving a false picture based on the information it retrieves and then tries to connect into a narrative. It’s a flaw that will undoubtedly get better with time.

The problem comes with how ChatGPT handles its dataset and determines relevance between items in that dataset. In this thorough examination by Machine Learning expert Devansh Devansh, ChatGPT is compared to predictive autocomplete on your phone. Sometimes, through a glitch in the AI, it can take a weird direction.

When this happens on your phone, it’s word by word and you can easily spot where things are going off the rails. With ChatGPT, an initial error that might be small at first continues to propagate until the AI has spun complete bullshit and packaged it as truth. This is how it fabricated the Think Tank of Human Values in Business, a completely fictional organization, and inserted it into my CV in a very convincing way.
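To make that propagation a little more concrete, here’s a deliberately silly toy sketch (entirely hypothetical – a real LLM conditions on thousands of tokens of context, not a single word, and the vocabulary here is invented for illustration). It’s a tiny next-word table where one unlikely draw at an early step sends every word that follows down a different, fabricated path:

```python
import random

# Toy next-word table, a stand-in for a real language model's
# learned probabilities. One low-weight "wrong turn" exists at
# the very first step; everything after it is deterministic.
MODEL = {
    "gord":      [("hotchkiss", 9), ("thinks", 1)],  # 1-in-10 wrong branch
    "hotchkiss": [("writes", 1)],
    "writes":    [("columns", 1)],
    "thinks":    [("tanks", 1)],
    "tanks":     [("matter", 1)],
}

def generate(start, steps, rng):
    """Sample one word at a time, conditioning only on the last word.

    Because each pick becomes the context for the next, a single bad
    draw early on changes every subsequent word - the toy version of
    an error propagating through a generated narrative.
    """
    out = [start]
    for _ in range(steps):
        options = MODEL.get(out[-1])
        if not options:
            break
        words, weights = zip(*options)
        out.append(rng.choices(words, weights=weights, k=1)[0])
    return " ".join(out)

if __name__ == "__main__":
    rng = random.Random(0)
    for _ in range(5):
        print(generate("gord", 3, rng))
```

Most runs emit the boring, correct chain; hit the one-in-ten branch on the first word and the output is confidently, fluently wrong from there on. That, in miniature, is how a fictional think tank can end up in a bio without the model ever “deciding” to lie.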

There are many, many others who know much more about AI and Natural Language Processing than I do, so I’m going to recognize my limits and leave it there. Let’s just say that ChatGPT is prone to sharing its AI hallucinations in a very convincing way.

Users of ChatGPT Won’t Admit Its Limitations

I know and you know that marketers are salivating over the possibility of AI producing content at scale for automated marketing campaigns. There is a frenzy of positively giddy accounts about how ChatGPT will “revolutionize Content Creation and Analysis” – including this admittedly tongue in cheek one co-authored by MediaPost Editor in Chief Joe Mandese and – of course – ChatGPT.

So what happens when ChatGPT starts to hallucinate in the middle of a massive social media campaign that is totally on autopilot? Who will be the ghost in the machine that will say “Whoa there, let’s just take a sec to make sure we’re not spinning out fictitious and potentially dangerous content?”

No one. Marketers are only human, and humans will always look for the path of least resistance. We work to eliminate friction, not add it. If we can automate marketing, we will. And we will shift the onus of verifying information to the consumer of that information.

Don’t tell me we won’t, because we have in the past and we will in the future.

We Believe What We’re Told

We might like to believe we’re Cartesian, but when it comes to consuming information, we’re actually Spinozian.

Let me explain. French philosopher René Descartes and Dutch philosopher Baruch Spinoza had two different views of how we determine if something is true.

Descartes believed that understanding and believing were two different processes. According to Descartes, when we get new information, we first analyze it and then decide if we believe it or not. This is the rational assessment that publishers and marketers always insist that we humans do and it’s their fallback position when they’re accused of spreading misinformation.

But Baruch Spinoza believed that understanding and belief happened at the same time. We start from a default position of believing information to be true without really analyzing it.

In 1993, Harvard Psychology Professor Daniel Gilbert decided to put the debate to the test (Gilbert, Tafarodi and Malone). He split a group of volunteers in half and gave both a text description detailing a real robbery. In the text there were true statements, in green, and false statements, in red. Some of the false statements made the crime appear to be more violent.

After reading the text, the study participants were supposed to decide on a fair sentence. But one of the groups got interrupted with distractions. The other group completed the exercise with no distractions. Gilbert and his researchers believed the distracted group would behave in a more typical way.

The distracted group gave out substantially harsher sentences than the other group. Because they were distracted, they forgot that green sentences were true and red ones were false. They believed everything they read (in fact, Gilbert’s paper was called “You Can’t Not Believe Everything You Read”).

Gilbert’s study showed that humans tend to believe first and that we actually have to “unbelieve” if something is eventually proven to us to be false. One study even found the place in our brain where this happens – the Right Inferior Prefrontal Cortex. This suggests that “unbelieving” makes the brain work harder than believing, which happens by default.

This brings up a three-pronged dilemma when we consider ChatGPT: it will tend to hallucinate (at least for now), users of ChatGPT will disregard that flaw when there are significant benefits to doing so, and consumers of ChatGPT generated content will believe those hallucinations without rational consideration.

When Gilbert wrote his paper, he was still three decades away from this dilemma, but he wrapped up with a prescient debate:

“The Spinozan hypothesis suggests that we are not by nature, but we can be by artifice, skeptical consumers of information. If we allow this conceptualization of belief to replace our Cartesian folk psychology, then how shall we use it to structure our own society? Shall we pander to our initial gullibility and accept the social costs of prior restraint, realizing that some good ideas will inevitably be suppressed by the arbiters of right thinking? Or shall we deregulate the marketplace of thought and accept the costs that may accrue when people are allowed to encounter bad ideas? The answer is not an easy one, but history suggests that unless we make this decision ourselves, someone will gladly make it for us. “

Daniel Gilbert

What Gilbert couldn’t know at the time was that “someone” might actually be a “something.”

(Image:  Etienne Girardet on Unsplash)

I Wish I Were Half the Person ChatGPT Thinks I Am

Yes, I’ve finally crawled aboard the journalist bandwagon about ChatGPT. The reason I waited so long is that I don’t really have any valid technical or professional perspective to add to the conversation. All I have is my own personal opinion.

So here it is, for what it’s worth.

For my first test, I wanted to ask ChatGPT about something where there would be a fair amount of information online but also something where I had the advantage of knowing the inside scoop, allowing me to judge the accuracy of its response. So I asked about the Golden Triangle – the outcome of eye tracking studies we did almost 20 years ago (I just had a minor internal meltdown when I did the mental math and realized that was two decades ago).

I have to admit, ChatGPT nailed it. It did an accurate and concise summary. I’m not sure I could have done better myself.

Google’s Golden Triangle, also known as the “Golden Triangle of Search,” refers to the top section of Google’s search engine results page (SERP) where the most relevant and highest-ranked search results are displayed. It is called the “golden triangle” because eye-tracking studies have shown that users tend to focus their attention on the top three results in the search results page, forming a triangular shape.

The concept of the Golden Triangle is important for search engine optimization (SEO) because it emphasizes the importance of ranking high in search results. Websites that appear in the Golden Triangle are more likely to receive clicks and traffic than those that appear lower on the page.

However, it’s worth noting that the layout of Google’s search engine results page has changed over the years, and the Golden Triangle may not be as relevant as it once was. Google now includes various types of results, such as featured snippets and knowledge panels, that may appear above or alongside the traditional organic search results. As a result, SEO strategies have evolved to include optimizing for these new types of results as well.

When I asked who discovered The Golden Triangle, ChatGPT accurately said it was a joint effort.

The concept of the Golden Triangle in relation to Google’s search engine results page (SERP) was not discovered by a single person, but rather through a combination of eye-tracking studies and data analysis conducted by various researchers and digital marketing professionals.

One of the earliest studies that examined how users interact with Google’s search results page was conducted by Enquiro Research in 2005. This study, titled “The Golden Triangle,” found that users tended to focus their attention on the top-left portion of the search results page, where the first few results were displayed.

Since then, other studies and analyses have confirmed the importance of the top section of the search results page, and the term “Golden Triangle” has become a widely used term in the SEO industry to describe this phenomenon.

For the record, the main players in identifying the pattern were Kevin Lee at DidIt, myself and my agency, Enquiro, EyeTools, the San Francisco eye-tracking consultants who conducted the first study and – in an independent study – Jakob Nielsen at the Nielsen Norman Group. It was Kevin and I who gave it the name The Golden Triangle, mere moments before we had to present the findings to a group at a search conference in San Jose.

So let’s try something a little less black and white. Let’s see how ChatGPT does when it has to freestyle a bit. I asked who I was.

The first paragraph was pretty accurate:

Gord Hotchkiss is a Canadian internet marketing and search engine optimization (SEO) expert. He is the CEO of a digital marketing agency called “Enquiro Search Solutions” which specializes in providing marketing and SEO services to businesses. Gord Hotchkiss is also a well-known speaker and author on the topics of online marketing and search engine optimization.

The second paragraph started gently pumping my ego, throwing in some accomplishments I either never achieved or can’t remember doing:

Gord Hotchkiss has been involved in the online marketing industry for over 20 years and has worked with a wide range of clients, including Fortune 500 companies. He has written numerous articles on the topics of SEO, online marketing, and search engine technology for publications such as Search Engine Land, ClickZ, and MarketingProfs.

But in the third paragraph, ChatGPT definitely had me confused with someone else:

In addition to his work in the online marketing industry, Gord Hotchkiss is also a co-founder of the “Think Tank of Human Values in Business”, a non-profit organization that focuses on the intersection of business and ethics. He is known for his thought leadership in the industry and for advocating for ethical and sustainable business practices.

Okay, where the hell did that come from? It sounds extraordinarily cool and it’s something I would love to do, but it’s definitely not on my CV. As far as I’m aware, there is no such thing as the “Think Tank of Human Values in Business.” I wish there were, but there’s not.

I tried again. This time, ChatGPT seemed a little less sure of itself. The first sentence was somewhat correct, but after that ChatGPT seemed to manufacture my bio out of nothing but moonbeams and fairy dust. It said I was:

the CEO of a leading digital marketing agency, named “Digital Always Media Inc.” Gord is also a co-founder and chair of the “The Agents of Change” conference, an annual conference focused on helping businesses adapt to the ever-changing digital landscape.

Gord has written numerous articles and books on digital marketing and has been featured in several major media outlets, including The New York Times, The Wall Street Journal, and CNN.

I don’t know who this particular Gord Hotchkiss is, but I’d like to meet him. My AI doppelgänger is leaving me in the dust. I’d better get my shit together.

Or maybe ChatGPT can actually see into the future, showing me the person I just haven’t become yet.

What worries me is how easily ChatGPT constructed complete fabrications of who I was that sounded perfectly plausible. If I didn’t know myself better, I would have simply accepted it as fact. And that – unfortunately – is what we tend to do. We don’t analyze new information and determine if it’s right or wrong. Our default is to believe it’s true until proven otherwise.

It’s this human tendency that flags the real danger with ChatGPT. And, for that reason, I have some more digging to do.

Maybe this other Gord Hotchkiss guy can give me a hand. He sounds wicked smart.

(Image by Brian Penny — Pixabay license)

It Took a Decade, but Google Glass is Finally Broken

Did you hear that Google finally pulled the plug on Google Glass?

Probably not. The announcement definitely flew under the radar. It came with much less fanfare than the original roll out in 2013. The technology, which had been quietly on life support as an enterprise tool aimed at select industries, finally met its end with this simple statement on its support page:

Thank you for over a decade of innovation and partnership. As of March 15, 2023, we will no longer sell Glass Enterprise Edition. We will continue supporting Glass Enterprise Edition until September 15, 2023.

Talk about your ignoble demises. They’re offering a mere six months of support for those stubbornly hanging on to their Glass. Glass has been thrown into the ever-growing Google Graveyard, along with Google Health, Google+, Google Buzz, Google Wave, Knol – well, you get the idea.

It’s been 10 years, almost to the day, that Google invited 8000 people to become “Glass Explorers” (others had a different name – “Glassholes”) and plunge into the world of augmented reality.

I was not a believer – for a few reasons I talked about way back then. That led me to say, “Google Glass isn’t an adoptable product as it sits.” It took 10 years, but I can finally say, “I told you so.”

I did say that wearable technology, in other forms, would be a game changer. I just didn’t think that Google Glass was the candidate to do that. To be honest, I haven’t really thought that much more about it until I saw the muted news that this particular Glass was a lot more than half empty. I think there are some takeaways about the fading dividing line between technology and humans that we should keep in mind.

First of all, I think we’ve learned a little more about how our brains work with “always on” technologies like Google Glass. The short answer is: they don’t – at least not very well. And this is doubly ironic because, according to an interview with Google Glass product director Steve Lee on The Verge back in 2013, that was the whole point:

“We all know that people love to be connected. Families message each other all the time, sports fanatics are checking live scores for their favorite teams. If you’re a frequent traveler you have to stay up to date on flight status or if your gate changes. Technology allows us to connect in that way. A big problem right now are the distractions that technology causes.”

The theory was that it was much less distracting to have information right in the line of sight, rather than having to go to a connected screen that might be in your pocket.

Lee went on. “We wondered, what if we brought technology closer to your senses? Would that allow you to more quickly get information and connect with other people but do so in a way — with a design — that gets out of your way when you’re not interacting with technology? That’s sort of what led us to Glass.” 

The problem here was one of incompatible operating systems – the one that drove Google Glass and the one we have baked into our brains. It turned out that maybe the technology was a little too close to our senses. A 2016 study (Lewis and Neider) found that trying to split attention between two different types of tasks – one scanning information on a heads up display and one trying to focus on the task at hand – ended up with the brain not being able to focus effectively on either. The researchers ended with this cautionary conclusion: “Our data strongly suggest that caution should be exercised when deploying HUD-based informational displays in circumstances where the primary user task is visual in nature. Just because we can, does not mean we should.”

For anyone who spends even a little time wondering how the brain works, this should not come as a surprise. There is an exhaustive list of research showing that the brain is not that great at multitasking. Putting a second cognitive task in our line of sight simply means the distraction is that much harder to ignore.

Maybe there’s a lesson here for Google. I think sometimes they get a little starry-eyed about their own technological capabilities and forget to factor in the human element. I remember talking to a roomful of Google engineers more than a decade ago about search behaviors. I asked them if any of them had heard of Pirolli and Card’s pioneering work on Information Foraging theory. Not one hand went up. I was gobsmacked. That should be essential reading for anyone working on a search interface. Yet, on that day, the crickets were chirping loudly in Mountain View.

If the Glass team had done their human homework, they would have found that the brain needs to focus on one task at a time. If you’re looking to augment reality with additional information, that information has to be synthesized into a single cohesive task for the brain. This means that for augmented reality to be successful, the use case has to be carefully studied to make sure the brain isn’t overloaded.

But I suspect there was another sticking point that prevented Google Glass from being widely adopted. It challenged the very nature of our relationship with technology. We like to believe we control technology, rather than the other way around. We have defined the online world as somewhere we “go” to through our connected devices. We are in control of when and where we do this. Pulling a device out and initiating an action keeps this metaphorical divide in place.

But Google Glass blurred this line in a way that made us uncomfortable. Again, a decade ago, I talked about the inevitable tipping point that will come with the merging of our physical and virtual worlds. Back then, I said, “as our technology becomes more intimate, whether it’s Google Glass, wearable devices or implanted chips, being ‘online’ will cease to be about ‘going’ and will become more about ‘being.’  As our interface with the virtual world becomes less deliberate, the paradigm becomes less about navigating a space that’s under our control and more about being an activated node in a vast network.”

I’m just speculating, but maybe Google Glass was just a step too far in this direction – for now, anyway.

(Feature image: Tim.Reckmann, CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0, via Wikimedia Commons)

The Pursuit of Happiness

Last week, I talked about physical places where you can find happiness – places like Fremont, California, the happiest city in the US, or Finland, the happiest country in the world.

But, of course, happiness isn’t a place. It’s a state of mind. You don’t find happiness. You experience happiness. And the nature of that experience is a tough thing to nail down.

That could be why the World Happiness Report was called “complete crap” by opinion columnist Kyle Smith back in 2017:

“These surveys depend on subjective self-reporting, not to mention eliding cultural differences. In Japan there is a cultural bias against boasting of one’s good fortune, and in East Asia the most common response, by far, is to report one’s happiness as average. In Scandinavia, meanwhile, there is immense societal pressure to tell everyone how happy you are, right up to the moment when you’re sticking your head in the oven.”

Kyle Smith, 2017

And that’s the problem with happiness. It’s kind of like quantum mechanics – the minute you try to measure it, it changes.

Do you ever remember your grandparents trying to measure their happiness? It wasn’t a thing they thought about. Sometimes they were happy, sometimes they weren’t. But they didn’t dwell on it. They had other, more pressing, matters to think about. And if you asked them to self-report their state of happiness, they’d look at you like you had just given birth to a three-horned billy goat.

Maybe we think too much about happiness. Maybe we’re setting our expectations too high. A 2011 study (Mauss, Tamir, Anderson & Savino) found that the pursuit of happiness may lead to the opposite outcome, never being happy. “People who highly value happiness set happiness standards that are difficult to obtain, leading them to feel disappointed about how they feel, paradoxically decreasing their happiness the more they want it.”

This is a real problem, especially in today’s media environment. Never in our lives have we been more obsessed with the pursuit of happiness. The problem comes with how we define that happiness. If you look at how media portrays happiness, it’s a pretty self-centred concept. It’s really all about us: what we have, where we are, how we’re feeling, what we’re doing. And all that is measured against what should make us happier.

That’s where the problem of measurement raises its prickly little head. In 1971, social scientists Philip Brickman and Donald T. Campbell came up with something called the “happiness set point.” They wanted to see if major life events – both negative and positive – actually changed how happy people were. The initial study and the follow-ups that further explored the question found that after an initial shift in happiness following major events such as lottery wins, big promotions or life-altering accidents, people gradually returned to a happiness baseline.

But more recent academic work has found that it’s not quite so simple. First of all, there’s no such thing as a universal happiness “set point.” We all have different baselines of how happy we are. Also, some of us are more apt to respond, either positively or negatively, to major life events.

There are life events that can remove the foundations of happiness – for example, losing your job, causing a significant downturn in your economic status. As I mentioned before, money may not buy happiness, but economic stability is correlated with happiness.

What can make a difference in happiness is what we spend time doing. And in this case, life events can set up the foundations of changes that lead to either more happiness or less. Generally, anything that leads to more interaction with others makes us happier. Anything that leads to social withdrawal tends to make us less happy.

So maybe happiness isn’t so much about how we feel, but rather a product of what we do.

Continuing on this theme, I found a couple of interesting data visualizations by statistician Nathan Yau. The most recent one examined the things that people did at work that made them happy.

If you’re in the legal profession, I have bad news. That ranked highest for stress and low for happiness and meaningfulness. On the other end of the spectrum, hairdressers and manicurists scored high for happiness and low on stress. Construction jobs also seemed to tick the right boxes when it comes to happiness on the job.

For me, the more interesting analysis was one Yau did back in 2018. He looked at a dataset that came from asking 10,000 people what had made them happy in the past 24 hours. Then he parsed the language of those responses to look for the patterns that emerged. The two biggest categories that led to happiness were “Achievement” and “Affection.”

From this, we start to see some common underpinnings for happiness: doing things for others, achieving the things that are important to us, spending time with our favorite people, bonding over shared experiences.

So let’s get back to the “pursuit of happiness” – something so important to Americans that they enshrined it in the Declaration of Independence. But, according to Stanford historian Caroline Winterer in her 2017 TED talk, that definition of happiness is significantly different from what we currently think of. In her words, that happiness meant, “Every citizen thinking of the larger good, thinking of society, and thinking about the structures of government that would create a society that was peaceful and that would allow as many people as possible to flourish.”

When I think of happiness, that makes more sense. It also matches the other research I’ve shared here. We seem happiest when we’re not focused on ourselves but are instead thinking about others. The opposite – navel-gazing over how our happiness comes up short when stacked against the unrealistic expectations set by social media – is a recipe for misery.

Like too many things in our society, happiness has morphed from something good and noble into a selfish sense of entitlement.

(Image credit – Creative Commons License – https://www.flickr.com/photos/stevenanichols/2722210623)

Real Life Usually Lives Beyond The Data

There’s an intriguing little show on Netflix you’ve probably never heard of that might be worth checking out. It’s called Travelers, a Canadian-produced sci-fi show that ran from 2016 to 2018. The only face you’ll probably recognize is Eric McCormack – Will from Will and Grace. He also happens to be a producer of the series.

The premise is this: special operatives from the future (the “travelers”) travel back in time to the present to prevent the collapse of society. They essentially “body snatch” everyday people from our present at the exact moment of their death and use their lives as cover to fulfill their mission.

And that’s not even the interesting part.

The real intrigue of the show comes from the everyday conflicts that arise from imperfectly shoehorning a stranger into the target’s real-world experience. The showrunners do a masterful job of weaving this into their storylines: the joy of eating a hamburger, a stomach turning at the thought of drinking actual milk from a cow, calling your “wife” by her real name when you haven’t called her that in all the time you’ve known her. And it’s in this that I discovered an unexpected parallel to our current approach to marketing.

This is a bit of a detour, so bear with me.

In the future, the research team compiles as much as they can about each of the people they’re going to “borrow” for their operatives. The profiles are compiled from social media, public records and everything they can discover from the data available.

But when the “traveler” actually takes over their life, there are no end of surprises and challenges – made up of all the trivial stuff that didn’t make it into the data profile.

You probably see where I’m going with this. When we rely solely on data to try to understand our customers or prospects, there will always be surprises. You can only learn these little quirks and nuances by diving into their lives.

That’s what A.G. Lafley, CEO of Procter &amp; Gamble from 2000 to 2010 and then again from 2013 to 2015, knew. In a 2002 Forbes profile of Lafley, writer Luisa Kroll said,

“Like the monarch in Mark Twain’s A Connecticut Yankee in King Arthur’s Court, Lafley often makes house calls incognito to find out what’s on the minds of his subjects. ‘Too much time was being spent inside Procter &amp; Gamble and not enough outside,’ says Lafley, who took over during a turbulent period two years ago. ‘I am a broken record when it comes to saying, ‘We have to focus on the customer.’'”

It wasn’t a bad way to run a business. Under Lafley’s guidance, P&G doubled their market cap, making them one of the 10 most valuable companies in the world.

Humans are messy and organic. Data isn’t. Data demands to be categorized, organized and columnized. When we deal with data, we necessarily have to treat it like data. And when we do that, we’re going to miss some stuff – probably a lot of stuff. And almost all of it will be the stuff of our lives, the things that drive behavior, the sparks that light our emotions.

It requires two different ways of thinking. Data sits in our prefrontal lobes, demanding that the brain be relentlessly rational. Data reduces behavior to bits and bytes, to be manipulated by algorithms into plotted trendlines and linear graphs. In fact, automation today can remove us humans from the process entirely. Data and A.I. work together to pull the levers and push the buttons on our advertising strategies. We just watch the dashboard.

But there’s another way of thinking – one that skulks down in the brain’s subcortical basement, jammed in the corner between the amygdala and the ventral striatum. It’s here where we stack all the stuff that makes us human; all the quirks and emotions, all our manias and motivations. This stuff is not rational, it’s not logical, it’s just life.

That’s the stuff A.G. Lafley found when he walked out the front door of Procter &amp; Gamble’s headquarters in Cincinnati and into the homes of its customers. And that’s the stuff the showrunners of Travelers had the insight to include in their narratives.

It’s the stuff that can make us sensational or stupid – often at the same time.