When AI Love Goes Bad

When we think about AI and its implications, it’s hard to wrap our non-digital, flesh-and-blood brains around the magnitude of it all. Try as we might, it’s impossible to forecast the impact of the massive wave of disruption bearing down on us. So today, to get a glimpse of the unintended consequences, I’d like to zoom in on one particular example.

There is a new app out there. It’s called Anima and it’s an AI girlfriend. It’s not the only one. When it comes to potential virtual partners, there are plenty of fish in the sea. But – for this post, let’s stay true to Anima. Here’s the marketing blurb on her website: “The most advanced romance chatbot you’ve ever talked to. Fun and flirty dating simulator with no strings attached. Engage in a friendly chat, roleplay, grow your love & relationship skills.”

Now, if there’s one area where our instincts should kick in and alarm bells should start going off about AI, it’s sexual attraction. If there were one human activity that seemed bound by necessity to being ITRW (in the real world), it should be this one.

If we start to imagine what might happen when we turn to AI for love, we could ask filmmaker Spike Jonze. He already imagined it 10 years ago, when he wrote the screenplay for “her,” the movie with Joaquin Phoenix. Phoenix plays Theodore Twombly, a soon-to-be-divorced man who upgrades his computer to a new OS, only to fall in love with the virtual assistant (voiced by Scarlett Johansson) that comes as part of the upgrade.

Predictably, complications ensue.

To get back to Anima: I’m always amused by the marketing language developers use to lull us into accepting things we should be panicking about. In this case, it was two phrases: “no strings attached” and “grow your love and relationship skills.”

First, about that “no strings attached” thing – I have been married for 34 years now and I’m here to tell you that relationships are all about “strings.” Those “strings” can also be called by other names: empathy, consideration, respect, compassion and – yes – love. Is it easy to keep those strings attached – to stay connected with the person at the other end of those strings? Hell, no! It is a constant, daunting, challenging work in progress. But the alternative is cutting those strings and being alone. Really alone.

If we get the illusion of a real relationship through some flirty version of ChatGPT, will it be easier to cut the strings that keep us connected to other real people out there? Will we be fooled into thinking something is real when it’s just a seductive algorithm? In “her”, Jonze brings Twombly back to the real world, ending with the promise of a relationship with a real person as they both gaze at the sunset. But I worry that that’s just a Hollywood ending. I think many people – maybe most people – would rather stick with the “no strings attached” illusion. It’s just easier.

And will AI adultery really “grow your love and relationship skills?” No. No more than scrolling through your Facebook feed grows your ability to discern accurate and reliable information. That’s just a qualifier the developer threw in so they wouldn’t feel crappy about leading their customers down the path to “AI-rmageddon.”

Even if we put all this other stuff aside for the moment, consider the vulnerable position we put ourselves in when we start mistaking robotic love for the real thing. All great cons rely on one of two things – either greed or love. When we think we’re in love, we drop our guard. We trust when we probably shouldn’t.

Take the Anima artificial girlfriend app, for example. We know nothing about the makers of this app. We don’t know where the data it collects goes. We certainly have no idea what their intentions are. Is this really who you want to start sharing your most intimate chit-chat with? Even if their intentions are benign, this is an app built by a for-profit company, which means there needs to be a revenue model in it somewhere. I’m guessing that all your personal data will be sold to the highest bidder.

You may think all this talk of AI love is simply stupid. We humans are too smart to be sucked in by an algorithm. But study after study has shown we’re not. We’re ready to make friends with a robot at the drop of a hat. And once we hit friendship, can love be far behind?

AI, Creativity and the Last Beatles Song

I have never been accused of being a Luddite. Typically, I’m on the other end of the adoption curve – one of the first to adopt a new technology. But when it comes to AI, I am stepping forward gingerly.

Now, my hesitancy notwithstanding, AI is here to stay. In my world, it is well past the tipping point from a thing that exists solely in the domain of tech to a topic of conversation for everyone, from butchers to bakers to candlestick makers. Everywhere I turn now I see those ubiquitous two letters – AI. That was especially true in the last week, with the turmoil around Sam Altman and the “is he fired/isn’t he” drama at OpenAI.

In 1991 Geoffrey Moore wrote the book Crossing the Chasm, looking at how technologies are adopted. He explained that adoption depends on the nature of the technology itself. If it’s a continuation of technology we understand, adoption follows a fairly straightforward bell curve through the general population.

But if it’s a disruptive technology – one that we’re not familiar with – then adoption plots itself out on an S-curve. The tipping point in the middle of that curve, where it switches from being skinny to being fat, is what he called the “chasm.” Some technologies get stuck on the wrong side of the chasm, never to be adopted by the majority of the market. Think Google Glass, for example.

There is often a pattern to the adoption of disruptive technologies (and AI definitely fits this description).  To begin with, we find a way to adapt it and use it for the things we’re already doing. But somewhere along the line, innovators grasp the full potential of the technology and apply it in completely new ways, pushing capabilities forward exponentially. And it’s in that push forward where all the societal disruption occurs. Suddenly, all the unintended consequences make themselves known.

This is exactly where we seem to be with AI. Most of us are using it to tweak the things we’ve always done. But the prescient amongst us are starting to look at what might be, and many of us are doing so with a furrowed brow. We’re worried, and, I suspect, with good reason.

As one example, I’ve been thinking about AI and creativity. As someone who has always dabbled in creative design, media production and writing, I’ve had this top of mind. I have often tried to pry open the mystic box that is the creative process.

There are many – creative software developers foremost amongst them – who will tell you that AI will be a game changer when it comes to creating, well, just about anything.

Or, in the case of the last Beatles single to be released, recreating anything. “Now and Then,” the final Beatles song featuring the Fab Four, was made possible by an AI program created by Peter Jackson’s team for the documentary Get Back. It allowed Paul McCartney, Ringo Starr and their team of producers (headed by George Martin’s son Giles) to separate John Lennon’s vocals from the piano background on a demo tape from 1978.
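The tool Jackson’s team built is proprietary, but stem separation itself has quietly become commodity technology. Here’s a minimal sketch of what that looks like today, using the open-source Demucs separator as a stand-in – the input file name is hypothetical, and the flags are Demucs’s as I understand them, so verify against its docs:

```python
import subprocess

# Split a recording into vocals and everything else with the open-source
# Demucs separator (NOT the custom model Peter Jackson's team built).
# Assumes Demucs is installed: pip install demucs
subprocess.run(
    ["demucs", "--two-stems", "vocals", "demo_tape.mp3"],  # hypothetical file
    check=True,
)
# Output should land under ./separated/<model-name>/demo_tape/ as
# vocals.wav and no_vocals.wav -- the voice lifted off the piano, more or less.
```

It won’t match what a bespoke model and a team of producers can do, but the fact that one pip install gets you most of the way there says a lot about how fast this capability is spreading.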

One last Beatles song featuring John Lennon – that should be a good thing, right? I guess. But there’s a flip side to this.

Let’s take writing, for example. Ask anyone who has written something longer than a tweet or Instagram post. What you start out intending to write is never what you end up with. Somehow, the process of writing takes its own twists and turns, usually surprising even the writer. Even these posts, which average only 700 to 800 words, usually end up going in unexpected directions by the time I place the final period.

Creativity is an iterative process, and there are stages in that process. It takes time for it all to play out. No matter how good my initial idea is, if I simply fed it into an AI black box and hit the “create” button, I don’t know if the outcome would be something I would be happy with.

“But,” you protest, “what about AI taking the drudgery out of the creative process? What if you use it to clean up a photo, or remove background noise from an audio recording (à la the Beatles single)? That should free up more time and more options for you to be creative, right?”

That promise is certainly what’s being pitched by AI merchants right now. And it makes sense. But it only makes sense at the skinny end of the adoption curve. That’s where we’re at right now, using AI as a new tool to do old jobs. If we think that’s where we’re going to stay, I’m pretty sure we’re being naïve.

I believe creativity needs some sweat. It benefits from a timeline that allows for thinking, and rethinking, over and over again. I don’t believe creativity comes from instant gratification, which is what AI gives us. It comes from iteration that creates the spaces needed for inspiration.

Now, I may be wrong. Perhaps AI’s ability to instantly produce hundreds of variations of an idea will prove the proponents right. It may unleash more creativity than ever. But I still believe we will lose an essential human element in the process that is critical to the act of creation.

Time will tell. And I suspect it won’t take very long.

(Image – The Beatles in WPAP – wendhahai)

Getting from A to Zen

We live in a Type A world. And sometimes, that’s to our detriment.

According to one definition, Type A is achievement oriented, competitive, fast-paced and impatient.

All of that pretty much sums up the environment we live in. But you know what’s hard to find in a Type A world? Your Zen.

I know what you’re thinking — “I didn’t peg Gord for a Zen-seeking kinda guy.” And you’re mostly right. I’m not much for meditation. I’ve tried it — it’s not for me. I’ll be honest. It feels a little too airy-fairy for my overly rational brain.

But I do love cutting the grass. I also love digging holes, retouching photos in Photoshop and cleaning pools. Those are some of the activities where I can find my Zen.

Best-selling author Peggy Orenstein found her Zen during COVID – shearing sheep. She shares her journey in her new book, “Unraveling: What I Learned About Life While Shearing Sheep, Dyeing Wool, and Making the World’s Ugliest Sweater.” Orenstein has a breezy, humorous and self-deprecating style, but there are some deep thoughts here.

In reading the book, I learned it wasn’t the act of shearing where Peggy found her Zen. That’s because sheep shearing is really hard work. You can’t let your mind wander as you wrestle 200 to 300 pounds of Ovis aries while holding a buzzing, super-sharp set of shears and trying to give it a haircut.

As Orenstein said in a recent interview, “Imagine you were in a ballet with Nureyev and nobody told you the steps. That was what it felt like to reach shearing sheep, you know, for the first time.”

No. You might find a lot of things in that activity, but Zen isn’t likely to be one of them. Orenstein finds her Zen in a less terrifying place: cleaning poop out of the newly shorn wool. She did it the way it’s been done for centuries, in a process called carding. While she carded the wool, she would FaceTime her dad, who has dementia.

In the interview, she said, “You know, I could just slow down. These ancient arts are slow. They’re very slow and (I would) sit with him and just be next to him and have that time together and sing.”

When I heard her say that, it hit me. I said, “I have to read this book.” Because I got it. That slowing down, that inner connection, the very act of doing something that seems mindless but isn’t – because doing it creates the space for your mind to think the thoughts it normally doesn’t have time to think. All that stuff is important.

To me, that’s my Zen.

Now, unless you’re a Mahayana Buddhist, Zen is probably nothing more than a buzzword that made its way westward into our zeitgeist sometime in the last century. I am certainly not a Buddhist, so I am not going to dare tell you the definitive meaning of Zen. I am just going to tell you what my version is.

For me, Zen is a few things:

First, these Zen acts have to contribute to the world in some small way. There has to be something at the end that gives you a sense of accomplishment – the feeling of a job well done.

Maybe that’s why meditation is not for me. There is no tangible reward at the end. But you can look at a pile of newly shorn fleece, or a lawn neatly delineated with the tire tracks of your lawnmower.

Second, the brain must be engaged in a Zen task, but not too much. It needs some space to wander. Repetition helps. As you do the task, your mind eventually shifts to auto-pilot mode. And that’s when I find Zen, as my mind is given the license to explore.

I think this is where step one is important – whatever you’re doing has to be useful enough that you don’t feel that you’re wasting time doing it.

Finally, it helps if your Zen tasks are done in a place where the Type A world doesn’t intrude. You need the space to push back interruption and let your mind wander freely.

I realize there are some of you who will immediately connect with what I’m saying, and others who won’t have a clue. That’s okay.

I think that’s the magic of Zen: it’s not for everyone. But for those of us who understand how important it is, we sometimes need a little reminder to go seek it. Because in this Type A world, it’s becoming harder to find.

When the News Hits Home

My, how things have changed.

My intention was to write a follow-up to last week’s post about Canada’s Bill C-18 and Meta’s banning of news on Facebook. I suppose this is a follow-up of sorts. But thanks to Mother Nature – that ofttimes bully – that story was pushed right out of the queue, replaced by something far more tragic and immediate.

To me, anyway.

I live in Kelowna. Chances are you’ve heard about my home in the last few days. If you haven’t, I can tell you that when I look out my window, all I can see is thick smoke. Which may be a good thing. Last Friday, when I could see, I spent the entire evening watching West Kelowna, across Okanagan Lake from my home, burn in the path of the oncoming McDougall Creek Wildfire. Each time the flames suddenly leaped toward the sky, you knew that was someone’s home igniting.

We don’t know how many homes have been lost. The fire has been too active for authorities to have time to count. We have firefighters and first responders pouring in from around our province to help. Our Air Quality Index is 11 on a scale of 10, as bad as it can get. Thousands are out of their homes. Thousands more have their things packed by the door, ready to leave at a moment’s notice. We’re one of them.

But that’s enough about the fire. This post is about our weird relationship with the news.

When something like this happens, you have a very real, very visceral need to know what’s going on. For those of us who live here in British Columbia, the news has hit home in a way we could never have imagined. A few posts ago, I said it might be healthier for me to ignore the news, because it’s always alarming and very seldom relevant to me. Well, those words are now coming back to haunt me.

This disaster has thrown our reliance on Facebook for news into stark relief. This last Friday, Canada’s Transportation Minister, Pablo Rodriguez, asked Meta to reverse its current ban on news: “We’ve seen that, throughout this emergency, Canadians have not had access to the crucial information they need. So, I ask Meta to reverse its decision, allow Canadians to have access to news on their platforms.”

But there’s another dimension to this that’s a bit more subtle yet even more frightening. It goes to the heart of how we handle crisis. I think you must necessarily “zoom in,” performing some type of terrible triage in your mind, to be able to imagine the unimaginable. As the winds shift the fire away from your home, there’s relief. But other homes now lie in the path of the fire. In your head, you know that, but emotionally you can’t help but feel a lift. It’s not noble, but it’s human.

So let’s “zoom out” – a lot. We’re not the only ones this is happening to. This is a global crisis. Twenty-six thousand people have been evacuated on the Spanish island of Tenerife. A friend of mine, an airline pilot, was volunteering just one week ago to fly people out of Maui who had lost their homes in the tragic Lahaina fire.

Take a look at NASA’s FIRMS (Fire Information for Resource Management System) website, which gives a global map of hotspots from actively burning wildfires. I’ve set this link to wildfire activity in the last 7 days.
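(For the data-curious: FIRMS exposes those same hotspot detections through a simple CSV API. The sketch below is mine, not from the post – the endpoint shape, source name and day range are recalled from FIRMS’s area API and should be treated as assumptions to verify, and you need a free MAP_KEY from the FIRMS site.)

```python
import csv
import io
import urllib.request

# NASA FIRMS area API, as I recall its shape (verify against the FIRMS docs):
#   /api/area/csv/<MAP_KEY>/<SOURCE>/<AREA>/<DAY_RANGE>
MAP_KEY = "your_map_key_here"  # free registration at firms.modaps.eosdis.nasa.gov

url = ("https://firms.modaps.eosdis.nasa.gov/api/area/csv/"
       f"{MAP_KEY}/VIIRS_SNPP_NRT/world/7")  # VIIRS hotspots, whole world, last 7 days

with urllib.request.urlopen(url) as resp:
    detections = list(csv.DictReader(io.TextIOWrapper(resp, encoding="utf-8")))

print(f"{len(detections)} satellite fire detections worldwide in the last 7 days")
```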

Scary as hell, right?

But can we actually process that, in a way that lets us move forward and start coping with this massive issue? Is it enough to change our behaviors in the way we must to finally start addressing climate change?

In a recent article on BBC.com, Richard Fisher talks about “Construal level theory” – which says that the greater the psychological distance there is between the news and your life, the less likely it is to make you change your behavior. For me, the psychological distance between myself and climate change is roughly 1 kilometer (just over half a mile) as the crow flies. That’s how far it is from my house to the nearest evacuation alert area.

It doesn’t get much closer than that.  But will we change? Will anything change?

I’m not so sure. We’ve been through this before. Exactly 20 years ago, the Okanagan Mountain wildfire raged through Kelowna, displacing over 30,000 people and destroying 239 homes. It was a summer much like this, at the time the driest summer on record. This year, we have smashed that record, as we have many times since that fire. Once we picked up, rebuilt our homes and got back to life, nothing really changed.

And now, here we are again. Let’s hope that this time is different.

It’s All in How You Spin It

I generally get about 100 PR pitches a week. And I’m just a guy who writes a post on tech, people and marketing now and then. I’m not a journalist. I’m not even gainfully employed by anyone. I am just one step removed — thanks to the platform  MediaPost has provided me — from “some guy” you might meet at your local coffee shop.

But still, I get 100 PR pitches a week. Desperation for coverage is the only reason I can think of for this to be so. 99.9999% of the time, they go straight to my trash basket. And the reason they do is that they’re almost never interesting. They are — well, they’re pitches for free exposure.

Now, the average pitch, even if it isn’t interesting, should at least try to match the target’s editorial interest. It should be in the strike zone, so to speak.

Let’s do a little postmortem on one I received recently. It was titled “AI in Banking.” Fair enough. I have written a few posts on AI. Specifically, I have written a few posts on my fear of AI.

I have also written about my concerns about misuse of data. When it comes to the nexus between AI and data, I would be considered more than a little pessimistic. So, something linking AI and banking did pique my interest, but not in a good way. I opened the email.

There, in the first paragraph, I read this: “AI is changing how banks provide personalized recommendations and insights based on enriched financial data offering tailored suggestions, such as optimizing spending, suggesting suitable investment opportunities, or identifying potential financial risks.”

This, for those of you not familiar with “PR-ese,” is what we in the biz call “spin.” Kellyanne Conway once called it — more euphemistically — an alternative fact.

Let me give you an example. Let’s say that during the Tour de France, half the peloton crashes and the cyclists get a nasty case of road rash. A PR person would spin that as “Hundreds of professional cyclists discover a new miracle instant-exfoliation technique from the South of France.”

See? It’s not a lie, it’s just an alternative fact.

Let’s go on. The second paragraph of the pitch continued: “Bud, a company that specializes in data intelligence is working with major partners across the country (Goldman Sachs, HSBC, 1835i, etc.) to categorize and organize financial information and data so that users are empowered to make informed decisions and gain a deeper understanding of their financial situation.”

Ah — we’re now getting closer to the actual fact. The focus is beginning to switch from the user, empowered to make better financial decisions thanks to AI, to what is actually happening: a data marketplace being built on the backs of users for sale to corporate America.

Let’s now follow the link to Bud’s website. There, in big letters on the home page, you read:

“Turn transactional data into real-time underwriting intelligence

Bud’s AI platform and data visualizations help lenders evaluate risk, reduce losses and unlock hidden revenue potential.”

Bingo. This is not about users, at least, not beyond using them as grist in a data mill. This is about slipping a Trojan Horse into your smartphone in the form of an app and hoovering your personal data up to give big banks an intimate glimpse into not just your finances, but also your thinking about those finances. As you bare your monetary soul to this helpful “Bud,” you have established a direct pipeline to the very institutions that hold your future in their greedy little fingers. You’re giving an algorithm everything it needs to automatically deny you credit.

This was just one pitch that happened to catch my eye long enough to dig a little deeper. But it serves as a perfect illustration of why I don’t trust big data or AI in the hands of for-profit corporations.

And that will continue to be true — no matter how you PR pros spin it.

No News is Good News

I’m trying not to pay too much attention to the news. This is partly because I’m exhausted by the news, and partly because of the sad state of journalism today.

This isn’t just a “me” thing. Almost everyone I talk to says they’re trying to find coping mechanisms to deal with the news. The News industry – and its audience – has gone from being an essential part of a working democracy to something that is actually bad for you. In an online essay from four years ago, Swiss author Rolf Dobelli equates news consumption to a bad diet:

“(translated from the original German) News is to the mind what sugar is to the body. News is appetizing, easily digestible and at the same time highly harmful. The media feeds us morsels of trivial stories, tidbits that by no means satisfy our hunger for knowledge. Unlike with books and long, well-researched articles, there is no saturation when consuming news. We can devour unlimited amounts of messages; they remain cheap sugar candies. As with sugar, the side effects only show up with a delay.”

Rolf Dobelli, 2019

This alarming state is due to the fact that the News (in the US) is supported by advertising, which means it has a ravenous appetite for eyeballs. Because of this, it is highly profitable to make news addictive.

This creates a state, as Dobelli points out, where even though the news is highly inflammatory, like a constantly jangling alarm bell, almost all the news we consume is irrelevant to our daily lives. While the news we watch pushes all our hot buttons, it doesn’t serve a useful purpose. In fact, it does the exact opposite: it leads to chronic mental and physical ill-being and may cause us to start ignoring the warning signs we should be paying attention to.

A study last year (McLaughlin, Gotlieb and Mills) found ties between problematic news consumption and mental ill-being. The study found that 16.5% of 1,100 people polled in an online survey showed signs of “severely problematic” news consumption, which led them to focus less on school, work and family, and contributed to an inability to sleep.

Dobelli’s essay goes even further, pointing a finger at excessive news consumption as the cause of a list of issues including cognitive errors, inhibiting deeper thinking, wasting time, killing creativity, making us more passive and even wiring our brains for addiction in a manner similar to drugs.

All these negative side effects come from chronic stress – a constant and pervasive alarmed state that excessive news consumption puts our brains into. And if you thought Dobelli’s list was scary, wait until you see the impact of chronic stress! It actually attacks the brain by releasing excessive amounts of cortisol and restricting the uptake of serotonin, which can increase inflammation, lead to depression, shrink your hippocampus and impact your memory, make it difficult to sleep and impair your ability to think rationally.

To put a new twist on an old saying, “No news is good news.”

But let’s put aside for a moment the physical and mental toll that news takes on us. Even if none of that were true, our constant diet of bad news can also lead to something known as “alarm fatigue.”

Alarm fatigue is essentially our response to the proverbial boy who cried wolf. After several false alarms, we stop paying attention. And the one time we should be paying attention, we are caught with our guard down.

There is one other problem with our news diet: it oversimplifies complex problems into simple sound bites. Thomas Jefferson said, “An educated citizenry is a vital requisite for our survival as a free people.” But when the news abdicates its role as an informer to pursue profit as entertainment, it is no longer educating us. It is pandering to us, stuffing us with bite-sized opinion pieces that reinforce our beliefs – right or wrong. We are never challenged to examine our beliefs or explore the complexity of the wicked problems that confront us. Real journalism has been replaced by profitable punditry.

All this leaves us with a choice. Until the News industry cleans up its act (I’m not holding my breath), you’re likely far better off ignoring it. Or at least ignoring the profit-driven platforms that are hungry for eyeballs. Stay informed by turning to books, long articles and true investigative journalism. That’s what I’m going to start doing.

Failing all that, just think about things. I understand it’s good for you.

Why I’m Worried About AI

Even in my world, which is nowhere near the epicenter of the technology universe, everyone is talking about AI. And depending on who’s talking, it’s either going to be the biggest boon to humanity, or it’s going to wipe us out completely. Middle ground seems to be hard to find.

I recently attended a debate at the local university about it. Two were arguing for AI, and two were arguing against. I went into the debate somewhat worried. When I walked out at the end of the evening, my worry was bubbling just under the panic level.

The “For” Team had a computer science professor, Kevin Leyton-Brown, and a philosophy professor, Madeleine Ransom. Their arguments seemed to rely mainly on AI creating more leisure time for us by freeing us from the icky jobs we’d rather not do. Leyton-Brown did make a passing reference to AI helping us solve the many, many wicked problems we face, but he never got into specifics.

“Relax!” seemed to be the message. “This will be great! Trust us!”

The “Against” Team included Bryce Traister, a professor in Creative and Critical Studies. As far as I could see, he seemed to be mainly worried about AI replacing Shakespeare. He also seemed quite enamored with the cleverness of his own quips.

It was the other “Against” debater who was the only one to talk about something concrete I could wrap my head around: Wendy Wong, a professor of Political Science with a book on data and human rights coming out this fall. Many of her concerns focused on that area.

Interestingly, the AI debaters all mentioned Social Media in their arguments. And on this point, they were united. All the debaters agreed that the impact of Social Media has been horrible. But the boosters were quick to say that AI is nothing like Social Media.

Except that it is. Maybe not in terms of the technology that lies beneath it, but in terms of the unintended consequences it could unleash, absolutely! Like Social Media, what will get us with AI are the things we don’t know we don’t know.

I remember when social media first appeared on the scene. As with AI, there were plenty of evangelists lining up to say the technology would connect us in ways we couldn’t have imagined. We were redefining community, removing the physical constraints that had previously limited connections.

If there’s a difference between social media and AI, it’s that I don’t remember any doomsayers at the advent of social media. Everyone seemed to be saying, “This will be great! Trust us!”

Today, of course, we know better. No one was warning us that social media would divide us in ways we never imagined, driving a wedge down the ideological middle of our society. There were no hints that social media could (and still might) short circuit democracy.

Maybe that’s why we’re a little warier when it comes to AI. We’ve already been fooled once.

I find that AI boosters share a similar mindset – they tend to be from the S.T.E.M. (Science, Technology, Engineering and Math) school of thought. As I’ve said before, these types of thinkers tend to mistake complex problems for complicated ones. They think everything is solvable if you just have a powerful enough tool and apply enough brain power. For them, AI is the Holy Grail – a powerful tool that potentially applies unlimited brain power.

But the dangers of AI are hidden in the roots of complexity, not complication, and that requires a different way of thinking. If we’re going to get some glimpse of what’s coming our way, I am more inclined to trust the instincts of those that think in terms of the humanities. A thinker, for example, such as Yuval Noah Harari, author of Sapiens.

Harari recently wrote an essay in the Economist that may be the single most insightful thing I’ve read about the dangers of AI: “AI has gained some remarkable abilities to manipulate and generate language, whether with words, sounds or images. AI has thereby hacked the operating system of our civilisation.”

In my previous experiments with ChatGPT, it was this fear that was haunting me. Human brains operate on narratives. We are hard-wired to believe them. By using language, AI has a back door into our brains that bypasses all our protective firewalls.

My other great fear is that the development of AI is being driven by for-profit corporations, many of which rely on advertising as their main source of revenue. If ever there was a case of putting the fox in charge of the henhouse, this is it!

When it comes to AI, it’s not my job I’m afraid of losing. It’s my ability to sniff out AI-generated bullshit. That’s what’s keeping me up at night.

The Dangerous Bits about ChatGPT

Last week, I shared how ChatGPT got a few things wrong when I asked it “who Gord Hotchkiss was.” I did this with my tongue at least partially planted in cheek – but the response did show me a real potential danger, one that comes from how we will interact with ChatGPT.

When things go wrong, we love to assign blame. And if ChatGPT gets things wrong, we will be quick to point the finger at it. But let’s remember, ChatGPT is a tool, and the fault very seldom lies with the tool. The fault usually lies with the person using the tool.

First of all, let’s look at why ChatGPT put together a bio for me that was somewhat less than accurate (although it was very flattering to yours truly).

When AI Hallucinates

I have found a few articles that call ChatGPT out for lying. But lying is an intentional act, and – as far as I know – ChatGPT has no intention of deliberately leading us astray. Based on how ChatGPT pulls together information and synthesizes it into a natural language response, it actually thought that “Gord Hotchkiss” did the things it told me I had done.

You could more accurately say ChatGPT is hallucinating – giving a false picture based on the information it retrieves and then tries to connect into a narrative. It’s a flaw that will undoubtedly get better with time.

The problem comes with how ChatGPT handles its dataset and determines relevance between items in that dataset. In this thorough examination, machine learning expert Devansh compares ChatGPT to the predictive autocomplete on your phone. Sometimes, through a glitch in the AI, it can take a weird direction.

When this happens on your phone, it’s word by word, and you can easily spot where things are going off the rails. With ChatGPT, an initial error that might be small at first continues to propagate until the AI has spun complete bullshit and packaged it as truth. This is how it fabricated the Think Tank of Human Values in Business, a completely fictional organization, and inserted it into my CV in a very convincing way.
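To make the mechanism concrete, here’s a toy sketch of my own – a bigram word-chain with an invented vocabulary, nothing like ChatGPT’s actual architecture – showing how one early wrong turn gets baked into everything that follows:

```python
import random

# Toy bigram "model": each word maps to the words that can follow it.
# The vocabulary is invented for illustration; a real LLM predicts over
# tens of thousands of tokens, but the conditioning principle is similar.
NEXT = {
    "gord": ["hotchkiss"],
    "hotchkiss": ["is", "co-founded"],  # "co-founded" is the early glitch
    "is": ["a"],
    "a": ["marketing", "non-profit"],
    "marketing": ["writer"],
    "co-founded": ["a"],                # once here, fiction compounds
    "non-profit": ["think"],
    "think": ["tank"],
}

def generate(start: str, max_words: int = 8) -> str:
    words = [start]
    # Each step conditions only on what came before -- like autocomplete,
    # there is no step where the output is checked against reality.
    while len(words) < max_words and words[-1] in NEXT:
        words.append(random.choice(NEXT[words[-1]]))
    return " ".join(words)

for _ in range(3):
    print(generate("gord"))
# Runs that pick "co-founded" early drift into pure fabrication
# ("gord hotchkiss co-founded a non-profit think tank"), and each
# subsequent word only makes the fabrication read more fluently.
```

The unnerving part is that the fabricated continuation is just as grammatical as the true one – fluency and accuracy are two entirely different things.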

There are many, many others who know much more about AI and natural language processing than I do, so I’m going to recognize my limits and leave it there. Let’s just say that ChatGPT is prone to sharing its AI hallucinations in a very convincing way.

Users of ChatGPT Won’t Admit Its Limitations

I know and you know that marketers are salivating over the possibility of AI producing content at scale for automated marketing campaigns. There is a frenzy of positively giddy accounts about how ChatGPT will “revolutionize Content Creation and Analysis” – including this admittedly tongue-in-cheek one co-authored by MediaPost Editor in Chief Joe Mandese and – of course – ChatGPT.

So what happens when ChatGPT starts to hallucinate in the middle of a massive social media campaign that is totally on autopilot? Who will be the ghost in the machine that says, “Whoa there, let’s just take a sec to make sure we’re not spinning out fictitious and potentially dangerous content”?

No one. Marketers are only human, and humans will always look for the path of least resistance. We work to eliminate friction, not add it. If we can automate marketing, we will. And we will shift the onus of verifying information to the consumer of that information.

Don’t tell me we won’t, because we have in the past and we will in the future.

We Believe What We’re Told

We might like to believe we’re Cartesian, but when it comes to consuming information, we’re actually Spinozan.

Let me explain. French philosopher René Descartes and Dutch philosopher Baruch Spinoza had two different views of how we determine if something is true.

Descartes believed that understanding and believing were two different processes. According to Descartes, when we get new information, we first analyze it and then decide whether we believe it or not. This is the rational assessment that publishers and marketers always insist we humans perform, and it’s their fallback position when they’re accused of spreading misinformation.

But Baruch Spinoza believed that understanding and belief happened at the same time. We start from a default position of believing information to be true without really analyzing it.

In 1993, Harvard psychology professor Daniel Gilbert decided to put the debate to the test (Gilbert, Tafarodi and Malone). He split a group of volunteers in half and gave both groups a text description of a real robbery. The text contained true statements, printed in green, and false statements, printed in red. Some of the false statements made the crime appear more violent.

After reading the text, the study participants were asked to decide on a fair sentence. But one of the groups was interrupted with distractions; the other group completed the exercise undistracted. Gilbert and his researchers believed the distracted group would behave in a more typical way.

The distracted group handed out substantially harsher sentences than the other group. Because they were distracted, they forgot that green sentences were true and red ones were false. They believed everything they read (in fact, Gilbert’s paper was called “You Can’t Not Believe Everything You Read”).

Gilbert’s study showed that humans tend to believe first, and that we actually have to “unbelieve” if something is eventually proven to be false. One study even found the place in our brain where this happens – the right inferior prefrontal cortex. This suggests that “unbelieving” forces the brain to work harder than believing, which happens by default.

This brings up a three-pronged dilemma when we consider ChatGPT: it will tend to hallucinate (at least for now), users of ChatGPT will disregard that flaw when there are significant benefits to doing so, and consumers of ChatGPT-generated content will believe those hallucinations without rational consideration.

When Gilbert wrote his paper, he was still 3 decades away from this dilemma, but he wrapped up with a prescient debate:

“The Spinozan hypothesis suggests that we are not by nature, but we can be by artifice, skeptical consumers of information. If we allow this conceptualization of belief to replace our Cartesian folk psychology, then how shall we use it to structure our own society? Shall we pander to our initial gullibility and accept the social costs of prior restraint, realizing that some good ideas will inevitably be suppressed by the arbiters of right thinking? Or shall we deregulate the marketplace of thought and accept the costs that may accrue when people are allowed to encounter bad ideas? The answer is not an easy one, but history suggests that unless we make this decision ourselves, someone will gladly make it for us.”

Daniel Gilbert

What Gilbert couldn’t know at the time was that “someone” might actually be a “something.”

(Image:  Etienne Girardet on Unsplash)

I Wish I Were Half the Person ChatGPT Thinks I Am

Yes, I’ve finally crawled aboard the journalist bandwagon about ChatGPT. The reason I’ve waited so long is that I don’t really have any valid technical or professional perspective to add to the conversation. All I have is my own personal opinion.

So here it is, for what it’s worth.

For my first test, I wanted to ask ChatGPT about something where there would be a fair amount of information online, but also something where I had the advantage of knowing the inside scoop, allowing me to judge the accuracy of its response. So I asked about the Golden Triangle – the outcome of eye-tracking studies we did almost 20 years ago (I just had a minor internal meltdown when I did the mental math and realized that was two decades ago).

I have to admit, ChatGPT nailed it. It did an accurate and concise summary. I’m not sure I could have done better myself.

Google’s Golden Triangle, also known as the “Golden Triangle of Search,” refers to the top section of Google’s search engine results page (SERP) where the most relevant and highest-ranked search results are displayed. It is called the “golden triangle” because eye-tracking studies have shown that users tend to focus their attention on the top three results in the search results page, forming a triangular shape.

The concept of the Golden Triangle is important for search engine optimization (SEO) because it emphasizes the importance of ranking high in search results. Websites that appear in the Golden Triangle are more likely to receive clicks and traffic than those that appear lower on the page.

However, it’s worth noting that the layout of Google’s search engine results page has changed over the years, and the Golden Triangle may not be as relevant as it once was. Google now includes various types of results, such as featured snippets and knowledge panels, that may appear above or alongside the traditional organic search results. As a result, SEO strategies have evolved to include optimizing for these new types of results as well.

When I asked who discovered The Golden Triangle, ChatGPT accurately said it was a joint effort.

The concept of the Golden Triangle in relation to Google’s search engine results page (SERP) was not discovered by a single person, but rather through a combination of eye-tracking studies and data analysis conducted by various researchers and digital marketing professionals.

One of the earliest studies that examined how users interact with Google’s search results page was conducted by Enquiro Research in 2005. This study, titled “The Golden Triangle,” found that users tended to focus their attention on the top-left portion of the search results page, where the first few results were displayed.

Since then, other studies and analyses have confirmed the importance of the top section of the search results page, and the term “Golden Triangle” has become a widely used term in the SEO industry to describe this phenomenon.

For the record, the main players in identifying the pattern were Kevin Lee at DidIt; myself and my agency, Enquiro; EyeTools, the San Francisco eye-tracking consultants who conducted the first study; and – in an independent study – Jakob Nielsen at the Nielsen Norman Group. It was Kevin and I who gave it the name The Golden Triangle, mere moments before we had to present the findings to a group at a search conference in San Jose.

So let’s try something a little less black and white. Let’s see how ChatGPT does when it has to freestyle a bit. I asked who I was.

The first paragraph was pretty accurate:

Gord Hotchkiss is a Canadian internet marketing and search engine optimization (SEO) expert. He is the CEO of a digital marketing agency called “Enquiro Search Solutions” which specializes in providing marketing and SEO services to businesses. Gord Hotchkiss is also a well-known speaker and author on the topics of online marketing and search engine optimization.

The second paragraph started pumping up my ego a little, throwing in some accomplishments I either never did or else can’t remember doing:

Gord Hotchkiss has been involved in the online marketing industry for over 20 years and has worked with a wide range of clients, including Fortune 500 companies. He has written numerous articles on the topics of SEO, online marketing, and search engine technology for publications such as Search Engine Land, ClickZ, and MarketingProfs.

But in the third paragraph, ChatGPT definitely had me confused with someone else:

In addition to his work in the online marketing industry, Gord Hotchkiss is also a co-founder of the “Think Tank of Human Values in Business”, a non-profit organization that focuses on the intersection of business and ethics. He is known for his thought leadership in the industry and for advocating for ethical and sustainable business practices.

Okay, where the hell did that come from? It sounds extraordinarily cool and it’s something I would love to do, but it’s definitely not on my CV. As far as I’m aware, there is no such thing as the “Think Tank of Human Values in Business.” I wish there were, but there’s not.

I tried again. This time, ChatGPT seemed a little less sure of itself. The first sentence was somewhat correct, but after that ChatGPT seemed to manufacture my bio out of nothing but moonbeams and fairy dust. It said I was:

the CEO of a leading digital marketing agency, named “Digital Always Media Inc.” Gord is also a co-founder and chair of the “The Agents of Change” conference, an annual conference focused on helping businesses adapt to the ever-changing digital landscape.

Gord has written numerous articles and books on digital marketing and has been featured in several major media outlets, including The New York Times, The Wall Street Journal, and CNN.

I don’t know who this particular Gord Hotchkiss is, but I’d like to meet him. My AI doppelgänger is leaving me in the dust. I’d better get my shit together.

Or maybe ChatGPT can actually see into the future, showing me the person I just haven’t become yet.

What worries me is how easily ChatGPT constructed complete fabrications of who I was that sounded perfectly plausible. If I didn’t know myself better, I would have simply accepted it as fact. And that – unfortunately – is what we tend to do. We don’t analyze new information and determine if it’s right or wrong. Our default is to believe it’s true until proven otherwise.

It’s this human tendency that flags the real danger with ChatGPT. And, for that reason, I have some more digging to do.

Maybe this other Gord Hotchkiss guy can give me a hand. He sounds wicked smart.

(Image by Brian Penny — Pixabay license)

It Took a Decade, but Google Glass is Finally Broken

Did you hear that Google finally pulled the plug on Google Glass?

Probably not. The announcement definitely flew under the radar. It came with much less fanfare than the original rollout in 2013. The technology, which had been quietly on life support as an enterprise tool aimed at select industries, finally had its plug pulled with this simple statement on its support page:

Thank you for over a decade of innovation and partnership. As of March 15, 2023, we will no longer sell Glass Enterprise Edition. We will continue supporting Glass Enterprise Edition until September 15, 2023.

Talk about your ignoble demises. They’re offering a mere six months of support for those stubbornly hanging on to their Glass. Glass has been thrown into the ever-growing Google Graveyard, along with Google Health, Google+, Google Buzz, Google Wave, Knol – well, you get the idea.

It’s been 10 years, almost to the day, since Google invited 8,000 people to become “Glass Explorers” (others had a different name – “Glassholes”) and plunge into the world of augmented reality.

I was not a believer – for a few reasons I talked about way back then. That led me to say, “Google Glass isn’t an adoptable product as it sits.” It took 10 years, but I can finally say, “I told you so.”

I did say that wearable technology, in other forms, would be a game changer. I just didn’t think Google Glass was the candidate to do that. To be honest, I hadn’t really thought much more about it until I saw the muted news that this particular Glass was a lot more than half empty. I think there are some takeaways here about the fading dividing line between technology and humans that we should keep in mind.

First of all, I think we’ve learned a little more about how our brains work with “always on” technologies like Google Glass. The short answer is, they don’t – at least not very well. And this is doubly ironic because, according to an interview with Google Glass product director Steve Lee on The Verge back in 2013, that was the whole point:

“We all know that people love to be connected. Families message each other all the time, sports fanatics are checking live scores for their favorite teams. If you’re a frequent traveler you have to stay up to date on flight status or if your gate changes. Technology allows us to connect in that way. A big problem right now are the distractions that technology causes.”

The theory was that it was much less distracting to have information right in the line of sight, rather than having to go to a connected screen that might be in your pocket.

Lee went on. “We wondered, what if we brought technology closer to your senses? Would that allow you to more quickly get information and connect with other people but do so in a way — with a design — that gets out of your way when you’re not interacting with technology? That’s sort of what led us to Glass.” 

The problem here was one of incompatible operating systems – the one that drove Google Glass and the one baked into our brains. It turned out that maybe the technology was a little too close to our senses. A 2016 study (Lewis and Neider) found that trying to split attention between two different types of tasks – scanning information on a heads-up display while trying to focus on the task at hand – left the brain unable to focus effectively on either. The researchers ended with this cautionary conclusion: “Our data strongly suggest that caution should be exercised when deploying HUD-based informational displays in circumstances where the primary user task is visual in nature. Just because we can, does not mean we should.”

For anyone who spends even a little time wondering how the brain works, this should not come as a surprise. There is an exhaustive body of research showing that the brain is not that great at multitasking. Putting a second cognitive task in our line of sight simply means the distraction is that much harder to ignore.

Maybe there’s a lesson here for Google. I think sometimes they get a little starry-eyed about their own technological capabilities and forget to factor in the human element. I remember talking to a roomful of Google engineers more than a decade ago about search behaviors. I asked them if any of them had heard of Pirolli and Card’s pioneering work on Information Foraging theory. Not one hand went up. I was gobsmacked. That should be essential reading for anyone working on a search interface. Yet, on that day, the crickets were chirping loudly in Mountain View.

If the Glass team had done their human homework, they would have found that the brain needs to focus on one task at a time. If you’re looking to augment reality with additional information, that information has to be synthesized into a single cohesive task for the brain. This means that for augmented reality to be successful, the use case has to be carefully studied to make sure the brain isn’t overloaded.

But I suspect there was another sticking point that prevented Google Glass from being widely adopted. It challenged the very nature of our relationship with technology. We like to believe we control technology, rather than the other way around. We have defined the online world as somewhere we “go” to through our connected devices. We are in control of when and where we do this. Pulling a device out and initiating an action keeps this metaphorical divide in place.

But Google Glass blurred this line in a way that made us uncomfortable. Again, a decade ago, I talked about the inevitable tipping point that will come with the merging of our physical and virtual worlds. Back then, I said, “as our technology becomes more intimate, whether it’s Google Glass, wearable devices or implanted chips, being ‘online’ will cease to be about ‘going’ and will become more about ‘being.’  As our interface with the virtual world becomes less deliberate, the paradigm becomes less about navigating a space that’s under our control and more about being an activated node in a vast network.”

I’m just speculating, but maybe Google Glass was just a step too far in this direction – for now, anyway.

(Feature image: Tim.Reckmann, CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0, via Wikimedia Commons)