Can OpenAI Make Searching More Useful?

As you may have heard, OpenAI is testing a prototype of a new search engine called SearchGPT. A press release from July 25 notes: “Getting answers on the web can take a lot of effort, often requiring multiple attempts to get relevant results. We believe that by enhancing the conversational capabilities of our models with real-time information from the web, finding what you’re looking for can be faster and easier.”

I’ve been waiting for this for a long time: search that moves beyond relevance to usefulness. It was 14 years ago that I said this in an interview with Aaron Goldman regarding his book “Everything I Know About Marketing I Learned from Google”: “Search providers have to replace relevancy with usefulness. Relevancy is a great measure if we’re judging information, but not so great if we’re measuring usefulness. That’s why I believe apps are the next flavor of search, little dedicated helpers that allow us to do something with the information. The information itself will become less and less important and the app that allows utilization of the information will become more and more important.”

I’ve felt for almost two decades that the days of search as a destination were numbered. For over 30 years now (Archie, the first internet search engine, was created in 1990), when we’re looking for something online, we search, and then we have to do something with what we find on the results page. Sometimes, a single search is enough — but often, it isn’t. For many of our intended end goals, we still have to do a lot of wading through the Internet’s deep end, filtering out the garbage, picking up the nuggets we need and then assembling those into something useful.

I’ve spent much of those past two decades pondering what the future of search might be. In fact, my previous company wrote a paper on it back in 2007. We were looking forward to what we thought might be the future of search, but we didn’t look too far forward. We set 2010 as our crystal ball horizon. Then we assembled an all-star panel of search design and usability experts, including Marissa Mayer, who was then Google’s vice president of search user experience and interface design, and Jakob Nielsen, principal of the Nielsen Norman Group and the web’s best known usability expert. We asked them what they thought search would look like in three years’ time.

Even back then, almost 20 years ago, I felt the linear presentation of a results page — the 10 blue links concept that started search — was limiting. Since then, we have moved beyond the 10 blue links. A Google search today for the latest iPhone model (one of our test queries in the white paper) actually looks eerily similar to the mock-up we did for what a Google search might look like in the year 2010. It just took Google 14 extra years to get there.

But the basic original premise of search is still there: Do a query, and Google will try to return the most relevant results. If you’re looking to buy an iPhone, today’s results page is probably more useful than it used to be, mainly due to sponsored content. But it’s still well short of the usefulness I was hoping for.

It’s also interesting to see what directions search has (and hasn’t) taken since then. Mayer talked a lot about interacting with search results. She envisioned an interface where you could annotate and filter your results: “I think that people will be annotating search results pages and web pages a lot. They’re going to be rating them, they’re going to be reviewing them. They’re going to be marking them up, saying ‘I want to come back to this one later.’”

That never really happened. The idea of search as a sticky and interactive interface for the web sort of materialized, but never to the extent that Mayer envisioned.

From our panel, it was Nielsen’s crystal ball that seemed to offer the clearest view of the future: “I think if you look very far ahead, you know 10, 20, 30 years or whatever, then I think there can be a lot of things happening in terms of natural language understanding and making the computer more clever than it is now. If we get to that level then it may be possible to have the computer better guess at what each person needs without the person having to say anything, but I think right now, it is very difficult.”

Nielsen was spot-on in 2007. It’s exactly those advances in natural language processing and artificial intelligence that could allow ChatGPT to now move beyond the paradigm of the search results page and move searching the web into something more useful.

A decade and a half ago, I envisioned an ecosystem of apps that could bridge the gap between what we intended to do and the information and functionality that could be found online.  That’s exactly what’s happening at OpenAI — a number of functional engines powered by AI, all beneath a natural language “chat” interface.

At this point, we still have to “say” what we want in the form of a prompt, but the more we use ChatGPT (or any AI interface) the better it will get to know us. In 2007, when we wrote our white paper on the future of search, personalization was what we were all talking about. Now, with ChatGPT, personalization could come back to the fore, helping AI know what we want even if we can’t put it into words.

As I mentioned in a previous post, we’ll have to wait to see if SearchGPT can make search more useful, especially for complex tasks like planning a vacation, making a major purchase or planning a big event.

But I think all the pieces are there. The monetization silos that dominate the online landscape will still prove a challenge to getting all the way to our final destination, but SearchGPT could make the journey faster and a little less taxing.

Note: I still have a copy of our 2007 white paper if anyone is interested. Just email me (the address is on the Contact Us page) and I’ll send you a copy.

The Adoption of A.I.

Recently, I was talking to a reporter about AI. She was working on a piece about what Apple’s integration of AI into the latest iOS (cleverly named Apple Intelligence) would mean for its adoption by users. Right at the beginning, she asked me this question, “What previous examples of human adoption of tech products or innovations might be able to tell us about how we will fit (or not fit) AI into our daily lives?”

That’s a big question. An existential question, even. Luckily, she gave me some advance warning, so I had a chance to think about it.  Even with the heads up, my answer was still well short of anything resembling helpfulness. It was, “I don’t think we’ve ever dealt with something quite like this. So, we’ll see.”

Incisive? Brilliant? Erudite? No, no and no.

But honest? I believe so.

When we think in terms of technology adoption, it usually falls into two categories: continuous and discontinuous. Continuous innovation simply builds on something we already understand. It’s adoption that follows a straight line, with little risk involved and little effort required. It’s driving a car with a little more horsepower, or getting a smartphone with more storage.

Discontinuous innovation is a different beast. It’s an innovation that displaces what went before it. In terms of user experience, it’s a blank slate, so it requires effort and a tolerance for risk to adopt it. This is the type of innovation that is adopted on a bell curve, first identified by American sociologist Everett Rogers in 1962. The acceptance of these new technologies spreads along a timeline defined by the personalities of the marketplace. Some are the type to try every new gadget, and some hang on to the tried and true for as long as they possibly can. Most of us fall somewhere in between.

As an example, think about going from driving a traditional car to an electric vehicle. The change from one to the other requires some effort. There’s a learning curve involved. There’s also risk. We have no baseline of experience to measure against. Some will be ahead of the curve and adopt early. Some will drive their gas clunker until it falls apart.

Falling into this second category of discontinuous innovation, but different by virtue of both the nature of the new technology and the impact it wields, are a handful of innovations that usher in a completely different paradigm. Think of the introduction of electrical power distribution in the late 19th century, the introduction of computers in the second half of the 20th century, or the spread of the internet in the 21st century.

Each of these was foundational, in that they sparked an explosion of innovation that wouldn’t have been possible if it were not for the initial innovation. These innovations not only change all the rules, they change the very game itself. And because of that, they impact society at a fundamental level. When these types of innovations come along, your life will change whether you choose to adopt the technology or not. And it’s these types of technological paradigm shifts that are rife with unintended consequences.

If I were trying to find a parallel for what AI means for us, I would look for it amongst these examples. And that presents a problem when we pull out our crystal ball and try to peer ahead at what might be. We can’t know. There’s just too much in flux – too many variables to compute with any accuracy. Perhaps we can project forward a few months or a year at the most, based on what we know today. But trying to peer any further forward is a fool’s game. Could you have anticipated what we would be doing on the Internet in 2024 when the first BBS (Bulletin Board System) was introduced in Chicago in 1978?

A.I. is like these previous examples, but it’s also different in one fundamental way. All these other innovations had humans at the switch. Someone needed to turn on the electrical light, boot up the computer or log on to the internet. At this point, we are still “using” A.I., whether it’s as an add-on in software we’re familiar with, like Adobe Photoshop, or a stand-alone app like ChatGPT, but generative A.I.’s real potential can only be discovered when it slips from the grasp of human control and starts working on its own, hidden under some algorithmic hood, safe from our meddling human hands.

We’ve never dealt with anything like this before. So, like I said, we’ll see.

Navigating Grief: Ouija Boards and AI Communication with the Dead

When I was growing up, we had a Ouija board in our home. But no one was allowed to use it, so it was hidden in the bottom of a forgotten closet. It was, according to my mother, “a thing of the devil.”

At this point, you might have two questions: what is a Ouija board, and if it was evil, why did we have one in the first place?

Ouija boards first gained popularity with the rise of the spiritualist movement in the late 1800s. They were also called spirit boards or witch boards. By the turn of the last century, the board had become a parlor game, marketed by the Kennard Novelty Company.

The Ouija board had the alphabet, numbers, the words “yes” and “no” and various other graphics and symbols printed on it. There was a “planchette” – a small heart-shaped piece of wood, generally on felt-tipped pegs. The planchette was placed in the middle of the board and those seated around the board would place their fingers on it. Then, the planchette, seemingly moving of its own accord, would spell out answers to questions from the group. Typically, the board was supposedly used to communicate with the spirits of those who had passed on, speaking through the board from the other side.

That brings us to why we had the board. My father died suddenly in 1962 at the age of 27. I was one year old when he passed away. My mother was just 24 and, in the span of a disappearing heartbeat, became both a widow and a single mother. My father did everything for my mom. And now, suddenly, he was gone.

Mom, as you may have guessed from the “devil” comment, was always quite religious. And despite the church frowning heavily on things like Ouija boards, her grief was such that she was convinced by a friend to try the board to talk once more to her departed husband, the love of her young life.

She never told me exactly what came from this experiment, but suffice to say that after that, the board was moved to the bottom of the closet, underneath a big cardboard box of other things we couldn’t use but also couldn’t throw away. It was never used again. I suspect some of my father’s things were also tucked away in that box.

While Ouija boards are not as popular as they once were, they’re still around, if you look hard enough for them. Hasbro now markets them, and you can even buy one through Amazon, if the spirit moves you. Amazon helpfully suggests bundling your purchase with a handheld LED ghost detector and the SB7 Spirit Box – also useful for exorcisms and hunting trips into the great beyond.

Various church leaders are still warning us not to use Ouija boards. One religious online publication cautions, “Ouija boards are not innocent toys that can be played at Halloween parties. They can have grave spiritual consequences that can last years, leading a person down the dark path of Satan’s lies.”

Consider yourself duly warned.

Of course, in the 62 years since my father passed away, technology has added a new wrinkle or two to our ability to talk to the dead. We can now do it through AI.

At the Amazon re:MARS conference in 2022, Senior Vice President Rohit Prasad told attendees that they were working on ways to change Alexa’s voice to that of anyone, living or dead. A video showed Alexa reading a bedtime story to a young child in the voice of his grandmother (presumably no longer with us to read the story herself). Prasad said Alexa could collect enough voice data to make this personalization possible from less than a minute of audio. While that may seem weird, or even creepy, to most of us, Prasad was unfazed: “While AI can’t eliminate that pain of loss, it can definitely make their memories last.”

A recent CNN article talked about other ways the grieving are using AI to stay in touch with their dearly departed. Rather than using a wooden pointer to laboriously spell out answers on a board, an AI avatar based on someone who has passed on can carry on a real-time conversation with us. If you train it with the right data, it can answer questions and provide advice. You can even create a video of those no longer here and chat with them. I know that if any of these technologies had been around 62 years ago, my mom would probably have tried them.

I spent much of my childhood watching my mother deal with her grief, so I certainly wouldn’t want to pass judgement on anyone willing to try anything to help heal the scars of loss, but this seems to be a dangerous path to go down, and not just because you may end up unknowingly chatting with demons.

As Mary-Frances O’Connor, a University of Arizona professor who studies grief, said in the CNN article, “When we fall in love with someone, the brain encodes that person as, ‘I will always be there for you and you will always be there for me.’ When they die, our brain has to understand that this person isn’t coming back.”

In 1969, psychiatrist Elisabeth Kübler-Ross defined the five stages of grief: denial, anger, bargaining, depression and acceptance. While these have been criticized as being overly simplistic and misleading (i.e., grief is usually not a linear journey going neatly from one stage to the next), it is commonly understood that – at some point – acceptance allows us to move on with our own lives. That might be harder to do if you’re lugging an AI-powered Ouija board with you.

My mom understood: some things are better left at the bottom of a forgotten closet.

You Know What Government Agencies Need? Some AI

A few items on my recent to-do list  have necessitated dealing with multiple levels of governmental bureaucracy: regional, provincial (this being in Canada) and federal. All three experiences were, without exception, a complete pain in the ass. So, having spent a good part of my life advising companies on how to improve their customer experience, the question that kept bubbling up in my brain was, “Why the hell is dealing with government such a horrendous experience?”

Anecdotally, everyone I know feels the same way. But what about everyone I don’t know? Do they also feel that the experience of dealing with a government agency is on par with having a root canal or a colonoscopy?

According to a survey conducted last year by the research firm Qualtrics XM, the answer appears to be yes. This report paints a pretty grim picture. Satisfaction with government services ranked dead last when compared to private sector industries.

The next question, being that AI is all I seem to have been writing about lately, is this: “Could AI make dealing with the government a little less awful?”

And before you say it, yes, I realize I recently took a swipe at the AI-empowered customer service used by my local telco. But when the bar is set as low as it is for government customer service, I have to believe that even with the limitations of artificially intelligent customer service as it currently exists, it would still be a step forward. At least the word “intelligent” is in there somewhere.

But before I dive into ways to potentially solve the problem, we should spend a little time exploring the root causes of crappy customer service in government.

First of all, government has no competitors. That means there are no market forces driving improvement. If I have to get a building permit or renew my driver’s license, I have one option available. I can’t go down the street and deal with “Government Agency B.”

Secondly, in private enterprise, the maxim is that the customer is always right. This is, of course, bullshit.  The real truth is that profit is always right, but with customers and profitability so inextricably linked, things generally work out pretty well for the customer.

The same is not true when dealing with the government. Their job is to make sure things are (supposedly) fair and equitable for all constituents. And the determination of fairness needs to follow a universally understood protocol. The result of this is that government agencies are relentlessly regulation bound and fixated on policies and process, even if those are hopelessly archaic. Part of this is to make sure that the rules are followed, but let’s face it, the bigger motivator here is to make sure all bureaucratic asses are covered.

Finally, there is a weird hierarchy that exists in government agencies.  Frontline people tend to stay in place even if governments change. But the same is often not true for their senior management. Those tend to shift as governments come and go. According to the Qualtrics study cited earlier, less than half (48%) of government employees feel their leadership is responsive to feedback from employees. About the same number (47%) feel that senior leadership values diverse perspectives.

This creates a workplace where most of the people dealing with clients feel unheard, disempowered and frustrated. This frustration can’t help but seep across the counter separating them from the people they’re trying to help.

I think all these things are givens and are unlikely to change in my lifetime. Still, perhaps AI could be used to help us navigate the serpentine landscape of government rules and regulations.

Let me give you one example from my own experience. I have to move a retaining wall that happens to front on a lake. In Canada, almost all lake foreshores are Crown land, which means you need to deal with the government to access them.

I have now been bouncing back and forth between three provincial ministries for almost two years to try to get a permit to do the work. In that time, I have lost count of how many people I’ve had to deal with. Just last week, someone sent me a couple of user guides that “I should refer to” in order to help push the process forward. One of them is 29 pages long. The other is 42 pages. They are both about as compelling and easy to understand as you would imagine a government document would be. After a quick glance, I figured out that only two of the 71 combined pages are relevant to me.

As I worked my way through them, I thought, “Surely some kind of ChatGPT interface would make this easier, digging through the reams of regulation to surface the answers I was looking for. Perhaps it could even guide me through the application process.”
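
To make that a little less hand-wavy, here’s a toy sketch of the idea in Python. The file name and the question are hypothetical stand-ins for my actual guides, and the naive keyword scoring is a placeholder for the embeddings-plus-LLM machinery a real assistant would use; the point is just how little plumbing it would take to surface the two relevant pages out of 71.

```python
# A toy sketch: surface the handful of relevant passages from a long
# government guide so a person (or an LLM prompt) only has to read those.
# The file name and the question are hypothetical stand-ins, and the
# keyword-overlap scoring is a placeholder for real embedding-based retrieval.
import re
from collections import Counter


def tokenize(text: str) -> Counter:
    """Lowercase word counts, good enough for a crude relevance score."""
    return Counter(re.findall(r"[a-z]+", text.lower()))


def top_passages(document: str, question: str, k: int = 3) -> list[str]:
    """Rank paragraphs by how many question terms they share."""
    q_terms = tokenize(question)
    paragraphs = [p.strip() for p in document.split("\n\n") if p.strip()]
    scored = [(sum(min(q_terms[w], tokenize(p)[w]) for w in q_terms), p)
              for p in paragraphs]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [p for score, p in scored[:k] if score > 0]


if __name__ == "__main__":
    with open("retaining_wall_guide.txt", encoding="utf-8") as f:  # hypothetical file
        guide = f.read()
    question = "What permit do I need to repair a retaining wall on Crown foreshore?"
    for passage in top_passages(guide, question):
        print(passage)
        print("---")
```

Point a real language model at the top-ranked passages and, at least in principle, you’d have the guided application helper I was wishing for.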

Let me tell you, it takes a lot to make me long for an AI-powered interface. But apparently, dealing with any level of government is enough to push me over the edge.

Dove’s Takedown Of AI: Brilliant But Troubling Brand Marketing

The Dove brand has just staked out a substantial position in the battle over the use of AI in media. In a campaign called “Keep Beauty Real”, the brand released a 2-minute video showing how AI can create an unattainable and highly biased (read “white”) view of what beauty is.

If we’re talking branding strategy, this campaign is a master class. It’s totally on-brand for Dove, which introduced its “Campaign for Real Beauty” 18 years ago. Since then, the company has consistently fought digital manipulation of advertising images, promoted positive body image and reminded us that beauty can come in all shapes, sizes and colors. The video itself is brilliant. You really should take a couple of minutes to see it if you haven’t already.

But what I found just as interesting is that Dove chose to use AI as a brand differentiator. The video starts by telling us, “By 2025, artificial intelligence is predicted to generate 90% of online content.” It wraps up with a promise: “Dove will never use AI to create or distort women’s images.”

This makes complete sense for Dove. It aligns perfectly with its brand. But it can only work because AI now has what psychologists call emotional valency. And that has a number of interesting implications for our future relationship with AI.

“Hot Button” Branding

Emotional valency is just a fancy way of saying that a thing means something to someone. The valence can be positive or negative, and it carries the sense of binding: if something has valency, it’s carrying emotional baggage, either good or bad.

This is important because emotions allow us to — in the words of Nobel laureate Daniel Kahneman — “think fast.” We make decisions without really thinking about them at all. It is the opposite of rational and objective thinking, or what Kahneman calls “thinking slow.”

Brands are all about emotional valency. The whole point of branding is to create a positive valence attached to a brand. Marketers don’t want consumers to think. They just want them to feel something positive when they hear or see the brand.

So for Dove to pick AI as an emotional hot button to attach to its brand, it must believe that the negative valence of AI will add to the positive valence of the Dove brand. That’s how branding mathematics sometimes work: a negative added to a positive may not equal zero, but may equal 2 — or more. Dove is gambling that with its target audience, the math will work as intended.

I have nothing against Dove, as I think the points it raises about AI are valid — but here’s the issue I have with using AI as a brand reference point: It reduces a very complex issue to a knee-jerk reaction. We need to be thinking more about AI, not less. The consumer marketplace is not the right place to have a debate on AI. It will become an emotional pissing match, not an intellectually informed analysis. And to explain why I feel this way, I’ll use another example: GMOs.

How Do You Feel About GMOs?

If you walk down the produce or meat aisle of any grocery store, I guarantee you’re going to see a “GMO-Free” label. You’ll probably see several. This is another example of squeezing a complex issue into an emotional hot button in order to sell more stuff.

As soon as I mentioned GMOs, you had a reaction, and it was probably negative. But how much do you really know about GMO foods? Did you know that GMO stands for “genetically modified organisms”? I didn’t, until I just looked it up now. Did you know that you almost certainly eat foods that contain GMOs, even if you try to avoid them? If you eat anything with sugar harvested from sugar beets, you’re eating GMOs. And over 90% of all canola, corn and soybean crops are GMOs.

Further, did you know that genetic modifications make plants more resistant to disease, more stable for storage and more likely to grow in marginal agricultural areas? If it wasn’t for GMOs, a significant portion of the world’s population would have starved by now. A 2022 study suggests that GMO foods could even slow climate change by reducing greenhouse gases.

If you do your research on GMOs — if you “think slow” about them — you’ll realize that there is a lot to think about, both good and bad. For all the positives I mentioned before, there are at least an equal number of troubling things about GMOs. There is no easy answer to the question, “Are GMOs good or bad?”

But by bringing GMOs into the consumer world, marketers have shut that debate down. They are telling you, “GMOs are bad. And even though you consume GMOs by the shovelful without even realizing it, we’re going to slap some GMO-free labels on things so you will buy them and feel good about saving yourself and the planet.”

AI appears to be headed down the same path. And if GMOs are complex, AI is exponentially more so. Yes, there are things about AI we should be concerned about. But there are also things we should be excited about. AI will be instrumental in tackling the many issues we currently face.

I can’t help worrying when complex issues like AI and GMOs are painted with the same broad brush, especially when that brush is in the hands of a marketer.

Feature image: Body Scan 002 by Ignotus the Mage, used under CC BY-NC-SA 2.0 / Unmodified

AI Customer Service: Not Quite Ready For Prime Time

I had a problem with my phone, which is a landline (and yes, I’ve heard all the smartass remarks about being the last person on earth with a landline, but go ahead, take your best shot).

The point is, I had a problem. Actually, the phone had a problem, in that it didn’t work. No tone, no life, no nothing. So that became my problem.

What did I do? I called my provider (from my cell, which I do have) and after going through this bizarre ID verification process that basically stopped just short of a DNA test, I got routed through to their AI voice assistant, who pleasantly asked me to state my problem in one short sentence.

As soon as I heard that voice, which used the same dulcet tones as Siri, Alexa and the rest of the AI Geek Chorus, I knew what I was dealing with. Somewhere at a board table in the not-too-distant past, somebody had come up with the brilliant idea of using AI for customer service. “Do you know how much money we could save by cutting humans out of our support budget?” After pointing to a chart with a big bar and a much smaller bar to drive the point home, there would have been much enthusiastic applause and back-slapping.

Of course, the corporate brain trust had conveniently forgotten that they can’t cut all humans out of the equation, as their customers still fell into that category.  And I was one of them, now dealing face to face with the “Artificially Intelligent” outcome of corporate cost-cutting. I stated my current state of mind more succinctly than the one short sentence I was instructed to use. It was, instead, one short word — four letters long, to be exact. Then I realized I was probably being recorded. I sighed and thought to myself, “Buckle up. Let’s give this a shot.”

I knew before starting that this wasn’t going to work, but I wasn’t given an alternative. So I didn’t spend too much time crafting my sentence. I just blurted something out, hoping to bluff my way to the next level of AI purgatory. As I suspected, Ms. AI was stumped. But rather than admit she was scratching her metaphysical head, she repeated the previous instruction, preceded by a patronizing “pat on my head” recap that sounded very much like it was aimed at someone with the IQ of a soap dish. I responded again with my four-letter reply — repeated twice, just for good measure.

Go ahead, record me. See if I care.

This time I tried a roundabout approach, restating my issue in terms that hopefully could be parsed by the cybernetic sadist that was supposedly trying to help me. Needless to say, I got no further. What I did get was a helpful text with all the service outages in my region. Which I knew wasn’t the problem. But no one asked me.

I also got a text with some troubleshooting tips to try at home. I had an immediate flashback to my childhood, trying to get my parents’ attention while they were entertaining friends at home, “Did you try to figure it out yourself, Gordie? Don’t bother Mommy and Daddy right now. We’re busy doing grown up things. Run along and play.”

At this point, the scientific part of my brain started toying with the idea of making this an experiment. Let’s see how far we can push the boundaries of this bizarre scenario: equally frustrating and entertaining. My AI tormenter asked me, “Do you want to continue to try to troubleshoot this on the phone with me?”

I was tempted, I really was. Probably by the same part of my brain that forces me to smell sour milk or open the lid of that unidentified container of green fuzz that I just found in the back of the fridge.  And if I didn’t have other things to do in my life, I might have done that. But I didn’t. Instead, in desperation I pleaded, “Can I just talk to a human, please?”

Then I held my breath. There was silence. I could almost hear the AI wheels spinning. I began to wonder if some well-meaning programmer had included a subroutine for contrition. Would she start pleading for forgiveness?

After a beat and a half, I heard this, “Before I connect you with an agent, can I ask you for a few more details so they’re better able to help you?” No thanks, Cyber-Sally, just bring on a human, posthaste! I think I actually said something to that effect. I might have been getting a little punchy in my agitated state.

As she switched me to my requested human, I swore I could hear her mumble something in her computer-generated voice. And I’m pretty sure it was an imperative with two words, the first a verb with four letters, the second a subject pronoun with three letters.

And, if I’m right, I may have newfound respect for AI. Let’s just call it my version of the Turing Test.

My Award for the Most Human Movie of the Year

This year I watched the Oscars with a different perspective. For the first time, I managed to watch nine of the 10 best picture nominees (My one exception was “The Zone of Interest”) before Sunday night’s awards. And for each, I asked myself this question, “Could AI have created this movie?” Not AI as it currently stands, but AI in a few years, or perhaps a few decades.

To flip it around, which of the best picture nominees would AI have the hardest time creating? Which movie was most dependent on humans as the creative engine?

AI’s threat to the film industry is at the top of everyone’s mind. It has been mentioned in pretty much every industry awards show. That threat was a major factor in the strikes that shut down Hollywood last year. And it was top of mind for me, as I wrote about it in my post last week.

So Sunday night, I watched as the 10 nominated films were introduced, one by one. And for each, I asked myself, “Is this a uniquely human film?” To determine that, I had to ask myself, “What sets human intelligence apart from artificial intelligence? What elements in the creative process most rely on how our brains work differently from a computer?”

For me, the answer was not what I expected. Using that yardstick, the winner was “Barbie.”

The thing that’s missing in artificial intelligence, for good and bad, is emotion. And from emotion comes instinct and intuition.

Now, all the films had emotion, in spades. I can’t remember a year where so many films driven primarily by character development and story were in the running. But it wasn’t just emotion that set “Barbie” apart; it was the type of emotion.

Some of the contenders, including “Killers of the Flower Moon” and “Oppenheimer,” packed an emotional wallop, but it was a wallop with one note. The emotional arc of these stories was predictable. And things that are predictable lend themselves to algorithmic discovery. AI can learn to simulate one-dimensional emotions like fear, sorrow, or disgust — and perhaps even love.

But AI has a much harder time understanding emotions that are juxtaposed and contradictory. For that, we need the context that comes from lived experience.

AI, for example, has a really tough time understanding irony and sarcasm. As I have written before, sarcasm requires some mental gymnastics that are difficult for AI to replicate.

So, if we’re looking for a backwater of human cognition that so far has escaped the tidal wave of AI bearing down on it, we could well find it in satire and sarcasm.

Barbie wasn’t alone in employing satire. “Poor Things” and “American Fiction” also used social satire as the backbone of their respective narratives.

What “Barbie” director Greta Gerwig did, with exceptional brilliance, was bring together a delicately balanced mix of contradictory emotions for a distinctively human experience. Gerwig somehow managed to tap into the social gestalt of a plastic toy to create a joyful, biting, insightful and ridiculous creation that never once felt inauthentic. It lived close to our hearts and was lodged in a corner of our brains that defies algorithmic simulation. The only way to create something like this was to lean into your intuition and commit fully to it. It was that instinct that everyone bought into when they came onboard the project.

Not everyone got “Barbie.” That often happens when you double down on your own intuition. Sometimes it doesn’t work — but sometimes it does. “Barbie” was the highest grossing movie of last year. Based on that endorsement, the movie-going public got something the voters of the Academy didn’t: the very human importance of Gerwig’s achievement. If you watched the Oscars on Sunday night, the best example of that importance was Ryan Gosling committing completely to his joyous performance of “I’m Just Ken,” which drew the biggest audience response of the entire evening.

 I can’t imagine an algorithm ever producing a creation like “Barbie.”

Will AI Wipe Out the Canadian Film and TV Industry?

If you happen to be Canadian, you know that one of our favourite pastimes is watching Hollywood movies and TV shows and picking out the Canadian locations that are standing in for American ones.

For example, that one shot from the otherwise amazing episode 3 of The Last of Us, establishing a location supposedly “10 miles west of Boston”? It was actually in Kananaskis, Alberta. Waltham, Mass., is 10 miles west of Boston. I’ve been there, and I know there is nary a mountain on the horizon in any direction.

Loudermilk, which is now streaming on Netflix, was a little more subtle with its geographic sleight of hand. There, Vancouver stands in for Seattle. The two cities are quite similar and most people would never notice the substitution, especially when the series uses stock footage of the Seattle skyline for its establishing shots. But if you’re Canadian, you couldn’t miss the Canada Post truck driving through the background of one shot.

Toronto is another popular “generic” Canadian city. It stood in for New York in Suits and the fictional Gilead in The Handmaid’s Tale. Ironically, this brings us full circle, because the location where the handmaid Offred lives is supposedly a post-revolution Boston. And, as we now know, if you go 10 “miles” west, you end up in Kananaskis, Alberta. Don’t wreck the Hollywood magic by reminding us that Toronto and Kananaskis are separated by 1725 actual miles.

The ability of Canadian locations to stand in for American ones is a critical element in our own movie and TV industry. It brings billions of production dollars north of the 49th parallel.

But it’s not just locations, it’s also people. Canadians have long flown “under the radar” as substitutes for Americans. I, and many Canadians, can reel them off from memory over a beer and a Hawaiian pizza (yep, that culinary cockup is Canadian too – sorry about that). The original Captain Kirk? Canadian. Bonanza patriarch Ben Cartwright? Canadian. Perry Mason? Canadian. Hell, even Barbie’s Ken is Canadian – eh?

But AI could be threatening this quintessentially Canadian activity. And that’s just the tip of the proverbial iceberg (another thing often found in Canada).

If you happened to watch the recent SAG-AFTRA awards, you probably saw president Fran Drescher refer to the threat AI posed to their industry. She warned us: “AI will entrap us in a matrix where none of us know what’s real. If an inventor lacks empathy and spirituality, perhaps that’s not the invention we need.”

If you’ve looked at OpenAI’s release of Sora, you can understand Drescher’s worry. Type a text prompt in and you instantly get a photorealistic HD video: a nighttime walk through Tokyo, mastodons in the mountains, pirate ships battling in a cup of coffee. And this is just the beginning.

But like many existential threats, it’s hard to wrap your mind around the scope of this one. So, in an attempt to practice what I preach and reduce the psychological distance, I’m going to try to bring it home for Canada – while understanding that the threat of AI to our particular neck of the woods is a small fraction of the potential damage it might do to the industry as a whole.

First of all, let’s understand the basis of the Canadian film and television industry. It’s almost entirely a matter of dollars and cents. The industry exists at its current scale because it’s cheaper to make a film or a TV show in Canada. And if a Canadian location can be made to look like a US one, so much the better. We have lots of talent up here and we have sound stages, but most importantly, we have a beneficial exchange rate and tax incentives. A production dollar goes a lot further here. That, and the ability of Canada to easily stand in for the US, are really the reasons why Hollywood moved north.

But if it suddenly becomes cheaper to stay in L.A. and use AI to create your location, rather than physically moving to a “stand-in” location, our film and TV industry will dry up almost instantly. Other than Quebec, where homegrown productions have a loyal francophone audience, there are relatively few films or TV shows that acknowledge that they are set in Canada. It’s the curse of living next to a potential audience that outnumbers ours 10 to 1, not to mention the ravenous world-wide appetite for entertainment that looks like it’s made in the US.

Even the most successful Canadian sitcom in history, Schitt’s Creek, had a location that was vaguely non-specific. The Roses never said their Rosebud motel wasn’t in Canada, but they also never said it was. (Here’s a tidbit for TV trivia fans: the motel used for the exterior shots is in Mono, Ontario, about 50 miles northwest of Toronto.)

So, if AI makes it easier and cheaper to CGI a location rather than move your production to Canada, what might that mean for our industry? Let’s put some scope to this. Just before COVID put the brakes on production, film and TV added $12.2 billion to Canada’s GDP and provided work for 244,500 people. I don’t want to minimize the creative efforts of our homegrown producers and directors, but if Hollywood stops coming north, we’ll be lucky to hold on to one-tenth of that economic spin off.

Like I said, the Canadian perspective of the impact AI might have on film and TV is a drop in the bucket. There are so many potential tentacles to this monster that it’s difficult to keep count. But even in the limited scope of this one example, the impact is devastating: over 12 billion dollars and a quarter million jobs. If we zoom out, it becomes enough to boggle the mind.

Privacy’s Last Gasp

We’ve been sliding down the slippery slope of privacy rights for some time. But like everything else in the world, the rapid onslaught of disruption caused by AI is unfurling a massive red flag when it comes to any illusions we may have about our privacy.

We have been giving away a massive amount of our personal data for years now without really considering the consequences. If we do think about privacy, we do so as we hear about massive data breaches. Our concern typically is about our data falling into the hands of hackers and being used for criminal purposes.

But when you combine AI and data, a bigger concern should catch our attention. Even if we have been able to retain some degree of anonymity, this is no longer the case. Everything we do is now traceable back to us.

Major tech platforms generally deal with any privacy concerns with the same assurance: “Don’t worry, your data is anonymized!” But really, even anonymized data requires very few dots to be connected to relink the data back to your identity.

Here is an example from the Electronic Frontier Foundation. Let’s say there is data that includes your name, your ZIP or postal code, your gender and your birthdate. If you remove your name, but include those other identifiers, technically that data is now anonymized.

But, says the EFF:

  • First, think about the number of people that share your specific ZIP or postal code. 
  • Next, think about how many of those people also share your birthday. 
  • Now, think about how many people share your exact birthday, ZIP code, and gender. 

According to a study from Carnegie Mellon University, those three factors are all that’s needed to identify 87% of the US population. If we fold in AI and its ability to quickly crunch massively large data sets to identify patterns, that percentage effectively becomes 100% and the data horizon expands to include pretty much everything we say, post, do or think. We may not think so, but we are constantly in the digital data spotlight and it’s a good bet that somebody, somewhere is watching our supposedly anonymous activities.
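
To see why those three fields are so revealing, a little back-of-envelope arithmetic helps. The figures below are my own rough assumptions (about 41,000 ZIP codes, roughly 80 years’ worth of birthdates, two recorded genders, a population of about 330 million), not census data, but the conclusion doesn’t depend on getting them exactly right.

```python
# Back-of-envelope: how many (ZIP, birthdate, gender) pigeonholes exist
# versus how many Americans there are to put in them. All figures are
# rough assumptions for illustration, not census data.
zip_codes = 41_000        # approximate number of US ZIP codes
birthdates = 80 * 365     # roughly 80 years of distinct birthdates
genders = 2               # as typically recorded in these datasets
population = 330_000_000  # approximate US population

cells = zip_codes * birthdates * genders
people_per_cell = population / cells

print(f"distinct (ZIP, birthdate, gender) combinations: {cells:,}")
print(f"average people per combination: {people_per_cell:.2f}")
```

With a couple of billion pigeonholes for a few hundred million people, most occupied pigeonholes hold exactly one person, which is why stripping the name off a record does so little to protect the person behind it.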

The other shred of comfort we tend to cling to when we trade away our privacy is that at least the data is held by companies we are familiar with, such as Google and Facebook. But according to a recent survey by Merkle reported on in MediaPost by Ray Schultz, even that small comfort may be slipping from our grasp. Fifty-eight percent of respondents said they were concerned about whether their data and privacy identity were being protected.

Let’s face it. If a platform is supported by advertising, then that platform will continue to develop tools to more effectively identify and target prospects. You can’t do that and also effectively protect privacy. The two things are diametrically opposed. The platforms are creating an ecosystem where it will become easier and easier to exploit individuals who thought they were protected by anonymity. And AI will exponentially accelerate the potential for that exploitation.

The platforms’ failure to protect individuals is currently being investigated by the US Senate Judiciary Committee. The individuals in this case are children and the protection that has failed is against sexual exploitation. None of the platform executives giving testimony intended for this to happen. Mark Zuckerberg apologized to the parents at the hearing, saying, “I’m sorry for everything you’ve all gone through. It’s terrible. No one should have to go through the things that your families have suffered.”

But this exploitation didn’t happen just because of one little crack in the system or because someone slipped up. It happened because Meta has intentionally and systematically been building a platform that collects the data and assembles the audience that make this exploitation possible. It’s like a gun manufacturer standing up and saying, “I’m sorry. We never imagined our guns would be used to actually shoot people.”

The most important question is: do we care that our privacy has effectively been destroyed? Sure, when we’re asked in a survey if we’re worried, most of us say yes. But our actions say otherwise. Would we trade away the convenience and utility these platforms offer us in order to get our privacy back? Probably not. And all the platforms know that.

As I said at the beginning, our privacy has been sliding down a slippery slope for a long time now. And with AI now in the picture, it’s probably going down for the last time. There is really no more slope left to slide down.

What If We Let AI Vote?

In his bestseller Homo Deus, Yuval Noah Harari argues that AI might mean the end of democracy. And his reasoning comes from an interesting perspective: how societies crunch their data.

Harari acknowledges that democracy might have been the best political system available to us – up to now. That’s because it relied on the wisdom of crowds. The hypothesis operating here is that if you get enough people together, each with different bits of data, you benefit from the aggregation of that data and – theoretically – if you allow everyone to vote, the aggregated data will guide the majority to the best possible decision.

Now, there is a truckload of “yeah, buts” in that hypothesis, but it does make sense. If the human ability to process data was the single biggest bottleneck in making the best governing decisions, distributing the processing amongst a whole bunch of people was a solution. Not the perfect solution, perhaps, but probably better than the alternatives. As Winston Churchill said, “It has been said that democracy is the worst form of government except for all those other forms that have been tried from time to time.”
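
The “wisdom of crowds” logic here is essentially the Condorcet jury theorem (my gloss, not Harari’s): if each voter is even slightly more likely than chance to pick the better option, a simple majority gets it right more and more often as the crowd grows. A quick simulation makes the point; the 55% individual accuracy is purely an assumption for illustration.

```python
# Simulate the Condorcet jury theorem intuition: each voter independently
# picks the "right" option with probability p, and we ask how often a
# simple majority gets it right. p = 0.55 is an assumption for illustration.
import random


def majority_accuracy(n_voters: int, p: float = 0.55, trials: int = 20_000) -> float:
    """Fraction of trials in which more than half the voters choose correctly."""
    correct_majorities = 0
    for _ in range(trials):
        correct_votes = sum(random.random() < p for _ in range(n_voters))
        if correct_votes > n_voters / 2:
            correct_majorities += 1
    return correct_majorities / trials


if __name__ == "__main__":
    random.seed(42)
    for n in (1, 11, 101, 1001):
        print(f"{n:>5} voters: majority is right {majority_accuracy(n):.1%} of the time")
```

Of course, the theorem leans on independent voters with halfway-decent individual judgment, which is exactly what the dirty data described below undermines.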

So, if we look back at our history, democracy seems to emerge as the winner. But the whole point of Harari’s Homo Deus is to look forward. It is, he promises, “A Brief History of Tomorrow.” And that tomorrow includes a world with AI, which blows apart the human data processing bottleneck: “As both the volume and speed of data increase, venerable institutions like elections, parties and parliaments might become obsolete – not because they are unethical, but because they don’t process data efficiently enough.”

The other problem with democracy is that the data we use to decide is dirty. Increasingly, thanks to the network effect anomalies that come with social media, we are using data that has no objective value, it’s simply the emotional effluent of ideological echo chambers. This is true on both the right and left ends of the political spectrum. Human brains default to using available and easily digestible information that happens to conform to our existing belief schema. Thanks to social media, there is no shortage of this severely flawed data.

So, if AI can process data exponentially faster than humans, can analyze that data to make sure it meets some type of objectivity threshold, and can make decisions based on algorithms that are dispassionately rational, why shouldn’t we let AI decide who should form our governments?

Now, I pretty much guarantee that many of you, as you’re reading this, are saying that this is B.S. This will, in fact, be humans surrendering control in the most important of arenas. But I must ask in all seriousness, why not? Could AI do worse than we humans do? Worse than we have done in the past? Worse than we might do again in the very near future?

These are exactly the type of existential questions we have to ask when we ponder our future in a world that includes AI.

It’s no coincidence that we have some hubris when it comes to believing we’re the best choice to be put in control of a situation. As Harari admits, the liberal human view that we have free will and should have control of our own future was really the gold standard. Like democracy, it wasn’t perfect, but it was better than all the alternatives.

The problem is that there is now a lot of solid science that indicates that our concept of free will is an illusion. We are driven by biological algorithms which have been built up over thousands of years to survive in a world that no longer exists. We self-apply a thin veneer of rationality and free will at the end to make us believe that we were in control and meant to do whatever it was we did. What’s even worse, when it appears we might have been wrong, we double down on the mistake, twisting the facts to conform to our illusion of how we believe things are.

But we now live in a world where there is – or soon will be – a better alternative. One without the bugs that proliferate in the biological OS that drives us.

As another example of this impending crisis of our own consciousness, let’s look at driving.

Up to now, a human was the best choice to drive a car. We were better at it than chickens or chimpanzees. But we are at the point where that may no longer be true. There is a strong argument that – as of today – autonomous cars guided by AI are safer than human-controlled ones. And, if the jury is still out on this question today, it is certainly going to be true in the very near future. Yet, we humans are loath to admit the inevitable and give up the wheel. It’s the same story as making our democratic choices.

So, let’s take it one step further. If AI can do a better job than humans in determining who should govern us, it will also do a better job in doing the actual governing. All the same caveats apply. When you think about it, democracy boils down to various groups of people pointing the finger at those chosen by other groups, saying they will make more mistakes than our choice. The common denominator is this: everyone is assumed to make mistakes. And that is absolutely the case. Right or left, Republican or Democrat, liberal or conservative, no matter who is in power, they will screw up. Repeatedly.

Because they are, after all, only human.