Uncommon Sense

Let’s talk about common sense.

“Common sense” is one of those underpinnings of democracy that we take for granted. Basically, it hinges on this concept: the majority of people will agree that certain things are true. Those things are then defined as “common sense.” And common sense becomes our reference point for what is right and what is wrong.

But what if the very concept of common sense isn’t true? That was what researchers Duncan Watts and Mark Whiting set out to explore.

Duncan Watts is one of my favourite academics. He is a computational social scientist at the University of Pennsylvania. I’m fascinated by network effects in our society, especially as they’re now impacted by social media. And that pretty much describes Watts’ academic research “wheelhouse.”

According to his profile, he’s “interested in social and organizational networks, collective dynamics of human systems, web-based experiments, and analysis of large-scale digital data, including production, consumption, and absorption of news.”

Duncan, you had me at “collective dynamics.”

I’ve cited his work in several columns before, notably his deconstruction of marketing’s ongoing love affair with so-called influencers: a previous study from Watts shot several holes in the idea of marketing to an elite group of “influencers.”

Whiting and Watts took 50 claims that would seem to fall into the category of common sense. They ranged from the obvious (“a triangle has three sides”) to the more abstract (“all human beings are created equal”). They then recruited an online panel of participants to rate whether the claims were common sense or not. Claims based on science were more likely to be categorized as common sense. Claims about history or philosophy were less likely to be identified as common sense.

What did they find? Well, apparently common sense isn’t very common. Their report says, “we find that collective common sense is rare: at most a small fraction of people agree on more than a small fraction of claims.” Less than half of the 50 claims were identified as common sense by at least 75% of respondents.

Now, I must admit, I’m not really surprised by this. We know we are part of a pretty polarized society. It’s no shock that we share little in the way of ideological common ground.

But there is a fascinating potential reason why common sense is actually quite uncommon: we define common sense based on our own realities, and what is real for me may not be real for you. We determine our own realities by what we perceive to be real, and increasingly, we perceive the “real” world through a lens shaped by technology and media – both traditional and social.

Here is where common sense gets confusing. Many things – especially abstract things – have subjective reality. They are not really provable by science. Take the idea that all human beings are created equal. We may believe that, but how do we prove it? What does “equal” mean?

So when someone appeals to our common sense (usually a politician), just what are they appealing to? It’s not a universally understood fact that everyone agrees on. It’s typically a framework of belief that is probably shared by only a relatively small percentage of the population. This really makes it a type of marketing, completely reliant on messaging and targeting the right market.

Common sense isn’t what it once was. Or perhaps it never was. Either common or sensible.

Feature image: clemsonunivlibrary

Talking Out Loud to Myself

I talk to myself out loud. Yes, full conversations, questions and answers, even debates — I can do everything all by myself.

I don’t do it when people are around. I’m just not that confident in my own cognitive quirks. It doesn’t seem, well… normal, you know?

But between you and me, I do it all the time. I usually walk at the same time. For me, nothing works better than some walking and talking with myself to work out particularly thorny problems.

Now, if I were using Google to diagnose myself, it would be a coin toss whether I was crazy or a genius. It could go either way. One of the sites I clicked on said it could be a symptom of psychosis. But another site pointed to a study at Bangor University (2012 – Kirkham, Breeze, Mari-Beffa) that indicates that talking to yourself out loud may indicate a higher level of intelligence. Apparently, Nikola Tesla talked to himself during lightning storms. Of course, he also had a severe aversion to women who wore pearl earrings. So the jury may still be out on that one.

I think pushing your inner voice through the language processing center of your brain and actually talking out loud does something to crystallize fleeting thoughts. One of the researchers of the Bangor study, Paloma Mari-Beffa, agrees with this hypothesis:

“Our results demonstrated that, even if we talk to ourselves to gain control during challenging tasks, performance substantially improves when we do it out loud.”

Mari-Beffa continues,

“Talking out loud, when the mind is not wandering, could actually be a sign of high cognitive functioning. Rather than being mentally ill, it can make you intellectually more competent. The stereotype of the mad scientist talking to themselves, lost in their own inner world, might reflect the reality of a genius who uses all the means at their disposal to increase their brain power.”

When I looked for any academic studies to support the value of talking out loud to yourself, I found one (Huang, Carr and Cao, 2001) that was obviously aimed at neuroscientists, something I definitely am not. But after plowing through it, I think it said the brain does work differently when you say things out loud.

Another one (Gruber, von Cramon 2001) even said that when we artificially suppress our strategy of verbalizing our thoughts, our brains seem to operate the same way that a monkey’s brain would, using different parts of the brain to complete different tasks (e.g., visual, spatial or auditory). But when allowed to talk to themselves, humans tend to use a verbalizing strategy to accomplish all kinds of tasks. This indicates that verbalization seems to be the preferred way humans work stuff out. It gives guide rails and a road map to our human brain.

But if we’ve learned anything about human brains, we’ve learned that they don’t all work the same way. Are some brains more likely to benefit from their owner talking out loud? Take introverts, for example. I am a self-confessed introvert. And I talk to myself. So I had to ask, are introverts more likely to have deep, meaningful conversations with themselves?

If you’re not an introvert, let me first tell you that introverts are generally terrible at small talk. But — if I do say so myself — we’re great at “big” talk. We like to go deep in our conversations, generally with just one other person. Walking and talking with someone is an introvert’s idea of a good time. So walking and talking with yourself should be the introvert’s holy grail.

While I couldn’t find any empirical evidence to support this correlation between self-talk and introversion, I did find a bucketful of sites about introverts noting that it’s pretty common for us to talk to ourselves. We are inclined to process information internally before we engage externally, so self-talk becomes an important tool in helping us to organize our thoughts.

Remember, external engagements tend to drain the battery of an introvert, so a little power management before the engagement to prevent running out of juice midway through a social occasion makes sense.

I know this is all a lot to think about. Maybe it would help to talk it out — by yourself.

Feature image by Brecht Bug – Flickr – Creative Commons

Dove’s Takedown Of AI: Brilliant But Troubling Brand Marketing

The Dove brand has just placed a substantial stake in the battleground over the use of AI in media. In a campaign called “Keep Beauty Real”, the brand released a 2-minute video showing how AI can create an unattainable and highly biased (read “white”) view of what beauty is.

If we’re talking branding strategy, this campaign is a master class. It’s totally on-brand for Dove, which introduced its “Campaign for Real Beauty” 18 years ago. Since then, the company has consistently fought digital manipulation of advertising images, promoted positive body image and reminded us that beauty can come in all shapes, sizes and colors. The video itself is brilliant. You really should take a couple of minutes to see it if you haven’t already.

But what I found just as interesting is that Dove chose to use AI as a brand differentiator. The video starts by telling us, “By 2025, artificial intelligence is predicted to generate 90% of online content.” It wraps up with a promise: “Dove will never use AI to create or distort women’s images.”

This makes complete sense for Dove. It aligns perfectly with its brand. But it can only work because AI now has what psychologists call emotional valency. And that has a number of interesting implications for our future relationship with AI.

“Hot Button” Branding

Emotional valency is just a fancy way of saying that a thing means something to someone. The valence can be positive or negative. The term valence comes from the German word valenz, which means to bind. So, if something has valency, it’s carrying emotional baggage, either good or bad.

This is important because emotions allow us to — in the words of Nobel laureate Daniel Kahneman — “think fast.” We make decisions without really thinking about them at all. It is the opposite of rational and objective thinking, or what Kahneman calls “thinking slow.”

Brands are all about emotional valency. The whole point of branding is to create a positive valence attached to a brand. Marketers don’t want consumers to think. They just want them to feel something positive when they hear or see the brand.

So for Dove to pick AI as an emotional hot button to attach to its brand, it must believe that the negative valence of AI will add to the positive valence of the Dove brand. That’s how branding math sometimes works: a negative added to a positive may not equal zero, but may equal 2 — or more. Dove is gambling that with its target audience, the math will work as intended.

I have nothing against Dove, as I think the points it raises about AI are valid — but here’s the issue I have with using AI as a brand reference point: It reduces a very complex issue to a knee-jerk reaction. We need to be thinking more about AI, not less. The consumer marketplace is not the right place to have a debate on AI. It will become an emotional pissing match, not an intellectually informed analysis. And to explain why I feel this way, I’ll use another example: GMOs.

How Do You Feel About GMOs?

If you walk down the produce or meat aisle of any grocery store, I guarantee you’re going to see a “GMO-Free” label. You’ll probably see several. This is another example of squeezing a complex issue into an emotional hot button in order to sell more stuff.

As soon as I mentioned GMO, you had a reaction to it, and it was probably negative. But how much do you really know about GMO foods? Did you know that GMO stands for “genetically modified organisms”? I didn’t, until I just looked it up now. Did you know that you almost certainly eat foods that contain GMOs, even if you try to avoid them? If you eat anything with sugar harvested from sugar beets, you’re eating GMOs. And over 90% of all canola, corn and soybean products are GMOs.

Further, did you know that genetic modifications make plants more resistant to disease, more stable for storage and more likely to grow in marginal agricultural areas? If it weren’t for GMOs, a significant portion of the world’s population would have starved by now. A 2022 study suggests that GMO foods could even slow climate change by reducing greenhouse gases.

If you do your research on GMOs — if you “think slow” about them — you’ll realize that there is a lot to think about, both good and bad. For all the positives I mentioned before, there are at least an equal number of troubling things about GMOs. There is no easy answer to the question, “Are GMOs good or bad?”

But by bringing GMOs into the consumer world, marketers have shut down that debate. They are telling you, “GMOs are bad. And even though you consume GMOs by the shovelful without even realizing it, we’re going to slap some GMO-free labels on things so you will buy them and feel good about saving yourself and the planet.”

AI appears to be headed down the same path. And if GMOs are complex, AI is exponentially more so. Yes, there are things about AI we should be concerned about. But there are also things we should be excited about. AI will be instrumental in tackling the many issues we currently face.

I can’t help worrying when complex issues like AI and GMOs are painted with the same broad brush, especially when that brush is in the hands of a marketer.

Feature image: Body Scan 002 by Ignotus the Mage, used under CC BY-NC-SA 2.0 / Unmodified

We SHOULD Know Better — But We Don’t

“The human mind is both brilliant and pathetic.  Humans have built hugely complex societies and technologies, but most of us don’t even know how a toilet works.”

– from “The Knowledge Illusion: Why We Never Think Alone” by Steven Sloman and Philip Fernbach.

Most of us think we know more than we do — especially about things we really know nothing about. This phenomenon is called the Dunning-Kruger Effect. Named after psychologists David Dunning and Justin Kruger, this bias causes us to overestimate our ability to do things that we’re not very good at.

That’s the basis of the new book “The Knowledge Illusion: Why We Never Think Alone.” The basic premise is this: We all think we know more than we actually do. Individually, we are all “error prone, sometimes irrational and often ignorant.” But put a bunch of us together and we can do great things. We were built to operate in groups. We are, by nature, herding animals.

This basic human nature was in the back of my mind when I was listening to an interview with Es Devlin on CBC Radio. Devlin describes herself as an artist and stage designer. She was the vision behind Beyoncé’s Renaissance Tour, U2’s current run at The Sphere in Las Vegas, and the 2022 Super Bowl halftime show with Dr. Dre, Snoop Dogg, Eminem and Mary J. Blige.

When it comes to designing a visually spectacular experience, Devlin has every right to be a little cocky. But even she admits that not every good idea comes directly from her. She said the following in the interview (it’s profound, so I’m quoting it at length):

“I learned quite quickly in my practice to not block other people’s ideas — to learn that, actually,  other people’s ideas are more interesting than my own, and that I will expand by absorbing someone else’s idea.

“The real test is when someone proposes something in a collaboration that you absolutely, [in] every atom of your body, revile against. They say, ‘Why don’t we do it in bubblegum pink?’ and it was the opposite of what you had in mind. It was the absolute opposite of anything you would dream of doing.

“But instead of saying, ‘Oh, we’re not doing that,’  you say ‘OK,’ and you try to imagine it. And then normally what will happen is that you can go through the veil of the pink bubblegum suggestion, and you will come out with a new thing that you would never have thought of on your own.

“Why? Because your own little batch of poems, your own little backpack of experience, does not converge with that other person, so you are properly meeting not just another human being, but everything that led up to them being in that room with you.”

From an interview with Tom Power on Q – CBC Radio, March 18, 2024

We live in a culture that puts the individual on a pedestal. When it comes to individualistic societies, none are more so than the United States (according to a study by Hofstede Insights). Protection of personal rights and freedoms is the cornerstone of our society (I am Canadian, but we’re not far behind on this world ranking of individualistic societies). The same is true in the U.K. (where Devlin is from), Australia, the Netherlands and New Zealand.

There are good things that come with this, but unfortunately it also sets us up as the perfect targets for the Dunning-Kruger effect. This individualism and the cognitive bias that comes with it are reinforced by social media. We all feel we have the right to be heard — and now we have the platforms that enable it.

With each post, our unshakable belief in our own genius and infallibility is bulwarked by a chorus of likes from a sycophantic choir jamming their fingers down on the like button. Where we should be skeptical of our own intelligence and knowledge, especially about things we know nothing about, we are instead lulled into hiding behind dangerous ignorance.

What Devlin has to say is important. We need to be mindful of our own limitations and be willing to ride on the shoulders of others so we can see, know and do more. We need to peek into the backpack of others to see what they might have gathered on their own journey.

(Feature Image – Creative Commons – https://www.flickr.com/photos/tedconference/46725246075/)

My Award for the Most Human Movie of the Year

This year I watched the Oscars with a different perspective. For the first time, I managed to watch nine of the 10 best picture nominees (My one exception was “The Zone of Interest”) before Sunday night’s awards. And for each, I asked myself this question, “Could AI have created this movie?” Not AI as it currently stands, but AI in a few years, or perhaps a few decades.

To flip it around, which of the best picture nominees would AI have the hardest time creating? Which movie was most dependent on humans as the creative engine?

AI’s threat to the film industry is at the top of everyone’s mind. It has been mentioned in pretty much every industry awards show. That threat was a major factor in the strikes that shut down Hollywood last year. And it was top of mind for me, as I wrote about it in my post last week.

So Sunday night, I watched as the 10 nominated films were introduced, one by one. And for each, I asked myself, “Is this a uniquely human film?” To determine that, I had to ask myself, “What sets human intelligence apart from artificial intelligence? What elements in the creative process most rely on how our brains work differently from a computer?”

For me, the answer was not what I expected. Using that yardstick, the winner was “Barbie.”

The thing that’s missing in artificial intelligence, for good and bad, is emotion. And from emotion comes instinct and intuition.

Now, all the films had emotion, in spades. I can’t remember a year where so many films driven primarily by character development and story were in the running. But it wasn’t just emotion that set “Barbie” apart; it was the type of emotion.

Some of the contenders, including “Killers of the Flower Moon” and “Oppenheimer,” packed an emotional wallop, but it was a wallop with one note. The emotional arc of these stories was predictable. And things that are predictable lend themselves to algorithmic discovery. AI can learn to simulate one-dimensional emotions like fear, sorrow, or disgust — and perhaps even love.

But AI has a much harder time understanding emotions that are juxtaposed and contradictory. For that, we need the context that comes from lived experience.

AI, for example, has a really tough time understanding irony and sarcasm. As I have written before, sarcasm requires mental gymnastics that are difficult for AI to replicate.

So, if we’re looking for a backwater of human cognition that so far has escaped the tidal wave of AI bearing down on it, we could well find it in satire and sarcasm.

Barbie wasn’t alone in employing satire. “Poor Things” and “American Fiction” also used social satire as the backbone of their respective narratives.

What “Barbie” director Greta Gerwig did, with exceptional brilliance, was bring together a delicately balanced mix of contradictory emotions for a distinctively human experience. Gerwig somehow managed to tap into the social gestalt of a plastic toy to create a joyful, biting, insightful and ridiculous creation that never once felt inauthentic. It lived close to our hearts and was lodged in a corner of our brains that defies algorithmic simulation. The only way to create something like this was to lean into intuition and commit fully to it. It was that instinct that everyone bought into when they came on board the project.

Not everyone got “Barbie.” That often happens when you double down on your own intuition. Sometimes it doesn’t work — but sometimes it does. “Barbie” was the highest-grossing movie of last year. Based on that endorsement, the movie-going public got something the voters of the Academy didn’t: the very human importance of Gerwig’s achievement. If you watched the Oscars on Sunday night, the best example of that importance was Ryan Gosling committing completely to his joyous performance of “I’m Just Ken,” which drew the biggest audience response of the entire evening.

I can’t imagine an algorithm ever producing a creation like “Barbie.”

What If We Let AI Vote?

In his bestseller Homo Deus, Yuval Noah Harari thinks AI might mean the end of democracy. And his reasoning comes from an interesting perspective – how societies crunch their data.

Harari acknowledges that democracy might have been the best political system available to us – up to now. That’s because it relied on the wisdom of crowds. The hypothesis operating here is that if you get enough people together, each with different bits of data, you benefit from the aggregation of that data and – theoretically – if you allow everyone to vote, the aggregated data will guide the majority to the best possible decision.

Now, there are a truckload of “yeah, but”s in that hypothesis, but it does make sense. If the human ability to process data was the single biggest bottleneck in making the best governing decisions, distributing the processing amongst a whole bunch of people was a solution. Not the perfect solution, perhaps, but probably better than the alternatives. As Winston Churchill said, “it has been said that democracy is the worst form of Government except for all those other forms that have been tried from time to time…”
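
To make that aggregation argument concrete, here’s a minimal sketch — my own illustration, not anything from Harari or the book — of the Condorcet-style logic behind the wisdom of crowds: if each voter independently gets a binary choice right just slightly more often than chance, a simple majority of many such voters is right far more often than any individual is. The function name and numbers below are purely illustrative.

```python
# A minimal sketch (illustrative only): majority voting among many slightly-
# better-than-chance voters is far more reliable than any single voter.
import random

def majority_accuracy(n_voters: int, p_correct: float, trials: int = 10_000) -> float:
    """Estimate how often a simple majority of n_voters picks the correct option."""
    wins = 0
    for _ in range(trials):
        # Each voter independently votes "correct" with probability p_correct.
        correct_votes = sum(random.random() < p_correct for _ in range(n_voters))
        if correct_votes > n_voters / 2:
            wins += 1
    return wins / trials

for n in (1, 11, 101, 1001):
    print(f"{n:>5} voters: majority correct {majority_accuracy(n, 0.55):.1%} of the time")
# Individuals are right ~55% of the time; a crowd of 1,001 is right well over 99% of the time.
```

Of course, that logic only holds if voters’ errors are roughly independent and each voter brings genuinely different bits of data — which is exactly the assumption that the dirty, echo-chamber data described below undermines.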

So, if we look back at our history, democracy seems to emerge as the winner. But the whole point of Harari’s Homo Deus is to look forward. It is, he promises, “A Brief History of Tomorrow.” And that tomorrow includes a world with AI, which blows apart the human data-processing bottleneck: “As both the volume and speed of data increase, venerable institutions like elections, parties and parliaments might become obsolete – not because they are unethical, but because they don’t process data efficiently enough.”

The other problem with democracy is that the data we use to decide is dirty. Increasingly, thanks to the network-effect anomalies that come with social media, we are using data that has no objective value; it’s simply the emotional effluent of ideological echo chambers. This is true on both the right and left ends of the political spectrum. Human brains default to using available and easily digestible information that happens to conform to our existing belief schema. Thanks to social media, there is no shortage of this severely flawed data.

So, if AI can process data exponentially faster than humans, can analyze that data to make sure it meets some type of objectivity threshold, and can make decisions based on algorithms that are dispassionately rational, why shouldn’t we let AI decide who should form our governments?

Now, I pretty much guarantee that many of you, as you’re reading this, are saying that this is B.S. This will, in fact, be humans surrendering control in the most important of arenas. But I must ask in all seriousness, why not? Could AI do worse than we humans do? Worse than we have done in the past? Worse than we might do again in the very near future?

These are exactly the types of existential questions we have to ask when we ponder our future in a world that includes AI.

It’s no surprise that we have some hubris when it comes to believing we’re the best choice to be put in control of a situation. As Harari admits, the liberal human view that we have free will and should have control of our own future was really the gold standard. Like democracy, it wasn’t perfect, but it was better than all the alternatives.

The problem is that there is now a lot of solid science indicating that our concept of free will is an illusion. We are driven by biological algorithms built up over thousands of years to survive in a world that no longer exists. We self-apply a thin veneer of rationality and free will at the end to make us believe that we were in control and meant to do whatever it was we did. What’s even worse, when it appears we might have been wrong, we double down on the mistake, twisting the facts to conform to our illusion of how we believe things are.

But we now live in a world where there is – or soon will be – a better alternative. One without the bugs that proliferate in the biological OS that drives us.

As another example of this impending crisis of our own consciousness, let’s look at driving.

Up to now, a human was the best choice to drive a car. We were better at it than chickens or chimpanzees. But we are at the point where that may no longer be true. There is a strong argument that – as of today – autonomous cars guided by AI are safer than human-controlled ones. And, if the jury is still out on this question today, it is certainly going to be true in the very near future. Yet we humans are loath to admit the inevitable and give up the wheel. It’s the same story as making our democratic choices.

So, let’s take it one step further. If AI can do a better job than humans in determining who should govern us, it will also do a better job of actually governing. All the same caveats apply. When you think about it, democracy boils down to various groups of people pointing the finger at those chosen by other groups, saying they will make more mistakes than our choice. The common denominator is this: everyone is assumed to make mistakes. And that is absolutely the case. Right or left, Republican or Democrat, liberal or conservative, no matter who is in power, they will screw up. Repeatedly.

Because they are, after all, only human.

OpenAI’s Q* – Why Should We Care?

OpenAI founder Sam Altman’s ouster and reinstatement has rolled through the typical news cycle, and we’re now back to blissful ignorance. But I think this will be one of the sea-change moments: a tipping point that we’ll look back on in the future, when AI has changed everything we thought we knew, and wonder, “How the hell did we let that happen?”

Sometimes I think that tech companies use acronyms and cryptic names for new technologies to allow them to sneak game changers in without setting off the alarm bells. Take OpenAI for example. How scary does Q-Star sound? It’s just one more vague label for something we really don’t understand.

If I’m right, we do have to ask the question, “Who is keeping an eye on these things?”

This week I decided to dig into the whole Sam Altman firing/hiring episode a little more closely so I could understand if there’s anything I should be paying attention to. Granted, I know almost nothing about AI, so what follows is very much at the layperson level, but I think that’s probably true for the vast majority of us. I don’t run into AI engineers that often in my life.

So, should we care about what happened a few weeks ago at OpenAI? In a word – YES.

First of all, a little bit about the dynamics of what led to Altman’s original dismissal. OpenAI started with the best of altruistic intentions: “to ensure that artificial general intelligence benefits all of humanity.” That was an ideal – many would say a naïve ideal – that Altman and OpenAI’s founders imposed on themselves. As Google discovered with its “Don’t Be Evil” mantra, it’s really hard to be successful and idealistic at the same time. In our world, success is determined by profits, and idealism and profitability almost never play in the same sandbox. Google quietly watered down the “Don’t Be Evil” motto until it virtually disappeared in 2018.

OpenAI’s non-profit board was set up as a kind of internal “kill switch” to prevent the development of technologies that could be dangerous to the human race. That theoretical structure was put to the test when the board received a letter this year from some senior researchers at the company warning of a new artificial intelligence discovery that might take AI past the threshold where it could be harmful to humans. The board then did what it was set up to do, firing Altman, removing board chairman Greg Brockman and putting the brakes on the potentially dangerous technology. Then Big Brother Microsoft (which has invested $13 billion in OpenAI) stepped in, and suddenly Altman was back. (Note – for a far more thorough and fascinating look at OpenAI’s unique structure and the endemic problems with it, read through Alberto Romero’s series of thoughtful posts.)

There were probably two things behind Altman’s ouster: the potential capabilities of a new development called Q-Star and a fear that it would follow OpenAI’s previous path of throwing it out there to the world, without considering potential consequences. So, why is Q-Star so troubling?

Q-Star could be a major step closer to AI that can reason and plan. This moves us closer to the overall goal of artificial general intelligence (AGI), the holy grail for every AI developer, including OpenAI. Artificial general intelligence, per OpenAI’s own definition, means “AI systems that are generally smarter than humans.” Q-Star, through its ability to tackle grade-school math problems, showed the promise of being artificial intelligence that could plan and reason. And that is an important tipping point, because something that can reason and plan pushes us forever past the boundary of a tool under human control. It’s technology that thinks for itself.

Why should this worry us? It should worry us because of Herbert Simon’s concept of “bounded rationality”, which explains that we humans are incapable of pure rationality. At some point we stop thinking endlessly about a question and come up with an answer that’s “good enough”. And we do this because of limited processing power. Emotions take over and make the decision for us.

But AGI throws those limits away. It can process exponentially more data at a rate we can’t possibly match. If we’re looking at AI through Sam Altman’s rose-colored glasses, that should be a benefit. Wouldn’t it be better to have decisions made rationally, rather than emotionally? Shouldn’t that be a benefit to mankind?

But here’s the rub. Compassion is an emotion. Empathy is an emotion. Love is also an emotion. What kind of decisions do we come to if we strip that out of the algorithm, along with any type of human check and balance?

Here’s an example. Let’s say that at some point in the future an AGI superbrain is asked the question, “Is the presence of humans beneficial to the general well-being of the earth?”

I think you know what the rational answer to that is.

When AI Love Goes Bad

When we think about AI and its implications, it’s hard to wrap our own non-digital, flesh-and-blood brains around the magnitude of it. Try as we might, it’s impossible to forecast the impact of this massive wave of disruption that’s bearing down on us. So today, in order to see what might be the unintended consequences, I’d like to zoom in on one particular example.

There is a new app out there. It’s called Anima and it’s an AI girlfriend. It’s not the only one. When it comes to potential virtual partners, there are plenty of fish in the sea. But – for this post, let’s stay true to Anima. Here’s the marketing blurb on her website: “The most advanced romance chatbot you’ve ever talked to. Fun and flirty dating simulator with no strings attached. Engage in a friendly chat, roleplay, grow your love & relationship skills.”

Now, if there’s one area where our instincts should kick in and alarm bells should start going off about AI, it should be in the area of sexual attraction. If there were one human activity that seems bound by necessity to happen ITRW (in the real world), it should be this one.

If we start to imagine what might happen when we turn to AI for love, we could ask filmmaker Spike Jonze. He already imagined it, 10 years ago when he wrote the screenplay for “her”, the movie with Joaquin Phoenix. Phoenix plays Theodore Twombly, a soon-to-be divorced man who upgrades his computer to a new OS, only to fall in love with the virtual assistant (voiced by Scarlett Johansson) that comes as part of the upgrade.

Predictably, complications ensue.

To get back to Anima, I’m always amused by the marketing language developers use to lull us into the acceptance of things we should be panicking about. In this case, it was two lines: “No strings attached” and “grow your love and relationship skills.”

First, about that “no strings attached” thing – I have been married for 34 years now and I’m here to tell you that relationships are all about “strings.” Those “strings” can also be called by other names: empathy, consideration, respect, compassion and – yes – love. Is it easy to keep those strings attached – to stay connected with the person at the other end of those strings? Hell, no! It is a constant, daunting, challenging work in progress. But the alternative is cutting those strings and being alone. Really alone.

If we get the illusion of a real relationship through some flirty version of ChatGPT, will it be easier to cut the strings that keep us connected to other real people out there? Will we be fooled into thinking something is real when it’s just a seductive algorithm? In “her”, Jonze brings Twombly back to the real world, ending with a promise of a relationship with a real person as they both gaze at the sunset. But I worry that that’s just a Hollywood ending. I think many people – maybe most people – would rather stick with the “no strings attached” illusion. It’s just easier.

And will AI adultery really “grow your love and relationship skills”? No. No more than you will grow your ability to determine accurate and reliable information by scrolling through your Facebook feed. That’s just a qualifier that the developer threw in so they didn’t feel crappy about leading their customers down the path to “AI-rmageddon.”

Even if we put all this other stuff aside for the moment, consider the vulnerable position we put ourselves in when we start mistaking robotic love for the real thing. All great cons rely on one of two things – either greed or love. When we think we’re in love, we drop our guard. We trust when we probably shouldn’t.

Take the Anima artificial girlfriend app, for example. We know nothing about the makers of this app. We don’t know where the data collected goes. We certainly have no idea what their intentions are. Is this really who you want to start sharing your most intimate chit-chat with? Even if their intentions are benign, this is an app built by a for-profit company, which means there needs to be a revenue model in it somewhere. I’m guessing that all your personal data will be sold to the highest bidder.

You may think all this talk of AI love is simply stupid. We humans are too smart to be sucked in by an algorithm. But study after study has shown we’re not. We’re ready to make friends with a robot at the drop of a hat. And once we hit friendship, can love be far behind?

AI, Creativity and the Last Beatles Song

I have never been accused of being a Luddite. Typically, I’m on the other end of the adoption curve – one of the first to adopt a new technology. But when it comes to AI, I am stepping forward gingerly.

Now, my hesitancy notwithstanding, AI is here to stay. In my world, it is well past the tipping point from a thing that exists solely in the domain of tech to a topic of conversation for everyone, from butchers to bakers to candlestick makers. Everywhere I turn now I see those ubiquitous two letters – AI. That was especially true in the last week, with the turmoil around Sam Altman and the “is he fired/isn’t he” drama at OpenAI.

In 1991, Geoffrey Moore wrote the book Crossing the Chasm, looking at how technologies are adopted. He explained that adoption depends on the nature of the technology itself. If it’s a continuation of technology we understand, adoption follows a fairly straightforward bell curve through the general population.

But if it’s a disruptive technology – one that we’re not familiar with – then adoption plots itself out on an S-curve. The tipping point in the middle of that curve, where it switches from being skinny to being fat, is what he called the “chasm.” Some technologies get stuck on the wrong side of the chasm, never to be adopted by the majority of the market. Think Google Glass, for example.

There is often a pattern to the adoption of disruptive technologies (and AI definitely fits this description).  To begin with, we find a way to adapt it and use it for the things we’re already doing. But somewhere along the line, innovators grasp the full potential of the technology and apply it in completely new ways, pushing capabilities forward exponentially. And it’s in that push forward where all the societal disruption occurs. Suddenly, all the unintended consequences make themselves known.

This is exactly where we seem to be with AI. Most of us are using it to tweak the things we’ve always done. But the prescient amongst us are starting to look at what might be, and for many of us, we’re doing so with a furrowed brow. We’re worried, and, I suspect, with good reason.

As one example, I’ve been thinking about AI and creativity. As someone who has always dabbled in creative design, media production and writing, this has been top of mind for me. I have often tried to pry open the mystic box that is the creative process.

There are many – creative software developers foremost among them – who will tell you that AI will be a game changer when it comes to creating, well, just about anything.

Or, in the case of the last Beatles single to be released, recreating anything. Now and Then, the final Beatles song featuring the Fab Four, was made possible by an AI program created by Peter Jackson’s team for the documentary Get Back. It allowed Paul McCartney, Ringo Starr and their team of producers (headed by George Martin’s son Giles) to separate John Lennon’s vocals from the piano background on a demo tape from 1978.

One last Beatles song featuring John Lennon – that should be a good thing, right? I guess. But there’s a flip side to this.

Let’s take writing, for example. Ask anyone who has written something longer than a tweet or Instagram post. What you start out intending to write is never what you end up with. Somehow, the process of writing takes its own twists and turns, usually surprising even the writer. Even these posts, which average only 700 to 800 words, usually end up going in unexpected directions by the time I place the final period.

Creativity is an iterative process, and there are stages in that process. It takes time for it all to play out. No matter how good my initial idea is, if I simply fed it into an AI black box and hit the “create” button, I don’t know if the outcome would be something I would be happy with.

“But,” you protest, “what about AI taking the drudgery out of the creative process? What if you use it to clean up a photo, or remove background noise from an audio recording (à la the Beatles single)? That should free up more time and more options for you to be creative, right?”

That promise is certainly what’s being pitched by AI merchants right now. And it makes sense. But it only makes sense at the skinny end of the adoption curve. That’s where we’re at right now, using AI as a new tool to do old jobs. If we think that’s where we’re going to stay, I’m pretty sure we’re being naïve.

I believe creativity needs some sweat. It benefits from a timeline that allows for thinking, and rethinking, over and over again. I don’t believe creativity comes from instant gratification, which is what AI gives us. It comes from iteration that creates the spaces needed for inspiration.

Now, I may be wrong. Perhaps AI’s ability to instantly produce hundreds of variations of an idea will prove the proponents right. It may unleash more creativity than ever. But I still believe we will lose an essential human element in the process that is critical to the act of creation.

Time will tell. And I suspect it won’t take very long.

(Image – The Beatles in WPAP – wendhahai)

Getting from A to Zen

We live in a Type A world. And sometimes, that’s to our detriment.

According to one definition, Type A is achievement oriented, competitive, fast-paced and impatient.

All of that pretty much sums up the environment we live in. But you know what’s hard to find in a Type A world? Your Zen.

I know what you’re thinking — “I didn’t peg Gord for a Zen-seeking kinda guy.” And you’re mostly right. I’m not much for meditation. I’ve tried it — it’s not for me. I’ll be honest. It feels a little too airy-fairy for my overly rational brain.

But I do love cutting the grass. I also love digging holes, retouching photos in Photoshop and cleaning pools. Those are some of the activities where I can find my Zen.

Best-selling author Peggy Orenstein found her Zen during COVID – shearing sheep. She shares her journey in her new book, “Unraveling: What I Learned About Life While Shearing Sheep, Dyeing Wool, and Making the World’s Ugliest Sweater.” Orenstein has a breezy, humorous, and self-deprecating style, but there are some deep thoughts here.

In reading the book, I learned it wasn’t the act of shearing where Peggy found her Zen. That’s because sheep shearing is really hard work. You can’t let your mind wander as you wrestle 200 to 300 pounds of Ovis aries, holding a buzzing, super-sharp set of shears while trying to give it a haircut.

As Orenstein said in a recent interview, “Imagine you were in a ballet with Nureyev and nobody told you the steps. That was what it felt like to reach shearing sheep, you know, for the first time.”

No. You might find a lot of things in that activity, but Zen isn’t likely to be one of them. Orenstein finds her Zen in a less terrifying place: cleaning poop out of the newly shorn wool. She did it the way it’s been done for centuries, in a process called carding. While she carded the wool, she would FaceTime her dad, who has dementia.

In the interview, she said, “You know, I could just slow down. These ancient arts are slow. They’re very slow and (I would) sit with him and just be next to him and have that time together and sing.”

When I heard her say that in the interview, it hit me. I said, “I have to read this book.” Because I got it. That slowing down, that inner connection, the very act of doing something that seems mindless but isn’t – because doing it creates the space for your mind to think the thoughts it normally doesn’t have time to think. All that stuff is important.

To me, that’s my Zen.

Now, unless you’re a Mahayana Buddhist, Zen is probably nothing more than a buzzword that made its way westward into our zeitgeist sometime in the last century. I am certainly not a Buddhist, so I am not going to dare tell you the definitive meaning of Zen. I am just going to tell you what my version is.

For me, Zen is a few things:

I think these Zen acts have to contribute to the world in some small way. There has to be something at the end that gives you a sense of accomplishment – the feeling of a job well done.

Maybe that’s why meditation is not for me. There is not a tangible reward at the end. But you can look at a pile of newly shorn fleece or a lawn neatly delineated with the tire tracks of your lawnmower.

The brain must be engaged in a Zen task, but not too much. It needs some space to wander. Repetition helps. As you do the task, your mind eventually shifts to auto-pilot mode. And that’s when I find Zen, as my mind is given the license to explore.

I think this is where step one is important – whatever you’re doing has to be useful enough that you don’t feel that you’re wasting time doing it.

Finally, it helps if your Zen tasks are done in a place where the Type A world doesn’t intrude. You need the space to push back interruption and let your mind wander freely.

I realize there are some of you who will immediately connect with what I’m saying, and others who won’t have a clue. That’s okay.

I think that’s the magic of Zen: it’s not for everyone. But for those of us who understand how important it is, we sometimes need a little reminder to go seek it. Because in this Type A world, it’s becoming harder to find.