My Award for the Most Human Movie of the Year

This year I watched the Oscars with a different perspective. For the first time, I managed to watch nine of the 10 best picture nominees (my one exception was “The Zone of Interest”) before Sunday night’s awards. And for each, I asked myself this question: “Could AI have created this movie?” Not AI as it currently stands, but AI in a few years, or perhaps a few decades.

To flip it around, which of the best picture nominees would AI have the hardest time creating? Which movie was most dependent on humans as the creative engine?

AI’s threat to the film industry is top of mind for everyone. It has been mentioned in pretty much every industry awards show. That threat was a major factor in the strikes that shut down Hollywood last year. And it was certainly on my mind, since I wrote about it in my post last week.

So Sunday night, I watched as the 10 nominated films were introduced, one by one. And for each, I asked myself, “Is this a uniquely human film?” To determine that, I had to ask myself, “What sets human intelligence apart from artificial intelligence? What elements in the creative process most rely on how our brains work differently from a computer?”

For me, the answer was not what I expected. Using that yardstick, the winner was “Barbie.”

The thing that’s missing in artificial intelligence, for good and bad, is emotion. And from emotion comes instinct and intuition.

Now, all the films had emotion, in spades. I can’t remember a year where so many films driven primarily by character development and story were in the running. But it wasn’t just emotion that set “Barbie” apart; it was the type of emotion.

Some of the contenders, including “Killers of the Flower Moon” and “Oppenheimer,” packed an emotional wallop, but it was a one-note wallop. The emotional arc of these stories was predictable. And things that are predictable lend themselves to algorithmic discovery. AI can learn to simulate one-dimensional emotions like fear, sorrow, or disgust — and perhaps even love.

But AI has a much harder time understanding emotions that are juxtaposed and contradictory. For that, we need the context that comes from lived experience.

AI, for example, has a really tough time understanding irony and sarcasm. As I have written before, sarcasm requires mental gymnastics that are difficult for AI to replicate.

So, if we’re looking for a backwater of human cognition that has so far escaped the tidal wave of AI bearing down on it, we could well find it in satire and sarcasm.

“Barbie” wasn’t alone in employing satire. “Poor Things” and “American Fiction” also used social satire as the backbone of their respective narratives.

What “Barbie” director Greta Gerwig did, with exceptional brilliance, was bring together a delicately balanced mix of contradictory emotions for a distinctively human experience. Gerwig somehow managed to tap into the social gestalt of a plastic toy to create a joyful, biting, insightful and ridiculous creation that never once felt inauthentic. It lived close to our hearts and was lodged in a corner of our brains that defies algorithmic simulation. The only way to create something like this was to lean into intuition and commit fully to it. It was that instinct that everyone bought into when they came on board the project.

Not everyone got “Barbie.” That often happens when you double down on your own intuition. Sometimes it doesn’t work — but sometimes it does. “Barbie” was the highest-grossing movie of last year. Based on that endorsement, the movie-going public got something the voters of the Academy didn’t: the very human importance of Gerwig’s achievement. If you watched the Oscars on Sunday night, the best example of that importance came when Ryan Gosling committed completely to his joyous performance of “I’m Just Ken,” which drew the biggest audience response of the entire evening.

I can’t imagine an algorithm ever producing a creation like “Barbie.”

The Messaging of Climate Change

Eighty-six percent of the world believes that climate change is real. That’s the finding of a massive new megastudy with hundreds of authors (the paper’s author acknowledgement runs a page and a half). Some 60,000 participants from 63 countries took part. And, as I said, 86% of them believe in climate change.

Frankly, there’s no surprise there. You just have to look out your window to see it. Here in my corner of the world, wildfires wiped out hundreds of homes last summer, and just a few weeks ago a weird winter whiplash took temperatures from unseasonably warm to deep-freeze cold literally overnight. This anomaly wiped out the region’s wine industry. The only thing I find surprising about the 86% stat is that 14% still don’t believe. That speaks of a determined type of ignorance.

What is interesting about this study is that it was conducted by behavioral scientists. This is an area that has always fascinated me. Ever since I read Richard Thaler and Cass Sunstein’s book “Nudge,” I have been interested in behavioral interventions. What are the most effective “nudges” for getting people to shift their behavior in more socially acceptable directions?

According to this study, that may not be so easy. When I first dove into it, my intention was to look at how different messages had different impacts depending on the audience: right wing vs. left wing, for instance. But in going through the results, what struck me most was just how poorly all the suggested interventions performed. It didn’t matter if you were liberal or conservative, or lived in Italy or Iceland. More often than not, the messaging fell on deaf ears.

What the study did find is that how you craft a climate change campaign depends on what you want people to do. Do you want to shift non-believers in climate change toward being believers? Then decrease the psychological distance. More simply put, bring the dangers of climate change to their front doorstep. If you live next to a lot of trees, talk about wildfires. If you live on the coast, talk about flooding. If you live in a rural area, talk about the impacts of drought. But it should be noted that we’re not talking about a massive shift here: the winning approach had an “absolute effect size of 2.3%.” It won by the sheer virtue of sucking the least.

If you want to build support for legislation that mitigates climate change, the best intervention was to encourage people to write a letter to a child who is close to them, with the intention that the child read it in the future. This forces the writer to put some psychological skin in the game.

Who could write a letter to the future for someone they care about without making some kind of pledge to make sure there’s still a world that person can live in? And once you do that, you feel obligated to follow through. Once again, though, this had a minimal impact on behaviors, with an overall effect size of 2.6%.

A year and a half ago, I talked about climate change messaging, debating MediaPost Editor in Chief Joe Mandese about whether a doom-and-gloom approach would move the needle on behaviors. In a commentary from the summer of 2022, Mandese wrapped up by saying, “What the ad industry really needs to do is organize a massive global campaign to change the way people think, feel and behave about the climate — moving from a not-so-alarmist ‘change’ to an ‘our house is on fire’ crisis.”

In a follow-up, I worried that doom and gloom might backfire on us: “Cranking up the crisis intensity on our messaging might have the opposite effect. It may paralyze us.”

So, what does this study say?

The answer, again, is “it depends.” If we’re talking about getting people to share posts on social media, then doom and gloom is the way to go. Of all the various messaging options, this had the biggest impact on sharing, by a notable margin.

This isn’t really surprising. A number of studies have shown that negative news is more likely to be shared on social media than positive news.

But what if we’re asking people to make a change that requires some effort beyond clicking the “share” button? What if they actually have to do something? Then, as I suspected, doom-and-gloom messaging had the opposite effect, decreasing the likelihood that people would make a behavioral change to address climate change (the study used a tree-planting initiative as an example). In fact, when participants were asked to change their behavior in an effortful way, all the tested climate interventions either had no effect or, worse, they “depress(ed) and demoralize(d) the public into inaction.”

That’s not good news. It seems that no matter what the message is, or who the messenger is, we’re likely to shoot that messenger if they ask us to do anything beyond burying our heads in the sand.

What’s even worse, we may be losing ground. A study from 10 years ago by Yale University had more encouraging results, showing that effective climate change messaging was able to shift public perceptions by up to 19%. While not nearly as detailed as the new study, the comparison seems to indicate a backslide in the effectiveness of climate messaging.

One of the commentators who covered the new worldwide study perhaps summed it up best: “If we’re dealing with what is probably the biggest crisis ever in the history of humanity, it would help if we actually could talk about it.”

Privacy’s Last Gasp

We’ve been sliding down the slippery slope of privacy rights for some time. But the rapid onslaught of disruption caused by AI is unfurling a massive red flag over any illusions we may still have about our privacy.

We have been giving away massive amounts of our personal data for years now without really considering the consequences. When we do think about privacy, it’s usually because we’ve heard about another massive data breach, and our concern is typically about our data falling into the hands of hackers and being used for criminal purposes.

But when you combine AI and data, a bigger concern should catch our attention. Whatever degree of anonymity we may have been able to retain is now gone. Everything we do is traceable back to us.

Major tech platforms generally deal with any privacy concerns with the same assurance: “Don’t worry, your data is anonymized!” In reality, it takes very few connected dots to relink even anonymized data back to your identity.

Here is an example from the Electronic Frontier Foundation. Let’s say there is data that includes your name, your ZIP or postal code, your gender and your birthdate. If you remove your name but keep those other identifiers, that data is technically anonymized.

But, says the EFF:

  • First, think about the number of people that share your specific ZIP or postal code. 
  • Next, think about how many of those people also share your birthday. 
  • Now, think about how many people share your exact birthday, ZIP code, and gender. 

According to a study from Carnegie Mellon University, those three factors are all that’s needed to identify 87% of the US population. Fold in AI and its ability to quickly crunch massive data sets to identify patterns, and that percentage effectively becomes 100%, while the data horizon expands to include pretty much everything we say, post, do or think. We may not think so, but we are constantly in the digital data spotlight, and it’s a good bet that somebody, somewhere is watching our supposedly anonymous activities.
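To make that dot-connecting concrete, here is a minimal sketch of a so-called linkage attack, the standard way “anonymized” data gets re-identified. Everything in it (names, records, data sets) is invented for illustration: you simply join an anonymized file to a public one, such as a voter roll, on exactly those three fields.

```python
# Minimal sketch of a "linkage attack": re-identify an anonymized
# data set by joining it to a public one (say, a voter roll) on the
# three quasi-identifiers the EFF lists. All records are invented.

import pandas as pd

# "Anonymized" records: names stripped, quasi-identifiers kept.
anonymized = pd.DataFrame({
    "zip":       ["99501", "10001"],
    "birthdate": ["1970-03-14", "1988-07-02"],
    "gender":    ["F", "M"],
    "diagnosis": ["diabetes", "asthma"],  # the sensitive attribute
})

# A public data set that does include names.
public = pd.DataFrame({
    "name":      ["Alice Smith", "Bob Jones", "Carol Lee"],
    "zip":       ["99501", "10001", "60601"],
    "birthdate": ["1970-03-14", "1988-07-02", "1995-11-30"],
    "gender":    ["F", "M", "F"],
})

# Joining on ZIP + birthdate + gender snaps names right back
# onto the "anonymous" records.
reidentified = anonymized.merge(public, on=["zip", "birthdate", "gender"])
print(reidentified[["name", "diagnosis"]])
```

Notice that no AI is needed at all; the three columns do the work. What AI adds is the ability to run this kind of matching against messier and vastly bigger data, at scale.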

The other shred of comfort we tend to cling to when we trade away our privacy is that at least the data is held by companies we are familiar with, such as Google and Facebook. But according to a recent survey by Merkle, reported on in MediaPost by Ray Schultz, even that small comfort may be slipping from our grasp: 58% of respondents said they were concerned about whether their data and identity were being protected.

Let’s face it. If a platform is supported by advertising, then that platform will continue to develop tools to more effectively identify and target prospects. You can’t do that and also effectively protect privacy. The two things are diametrically opposed. The platforms are creating an ecosystem where it will become easier and easier to exploit individuals who thought they were protected by anonymity. And AI will exponentially accelerate the potential for that exploitation.

The platforms’ failure to protect individuals is currently being investigated by the US Senate Judiciary Committee. The individuals in this case are children, and the protection that has failed is against sexual exploitation. None of the platform executives giving testimony intended for this to happen. Mark Zuckerberg apologized to the parents at the hearing, saying, “I’m sorry for everything you’ve all gone through. It’s terrible. No one should have to go through the things that your families have suffered.”

But this exploitation didn’t happen just because of one little crack in the system or because someone slipped up. It happened because Meta has intentionally and systematically been building a platform that collects the data and assembles the audience that make this exploitation possible. It’s like a gun manufacturer standing up and saying, “I’m sorry. We never imagined our guns would be used to actually shoot people.”

The most important question is: do we care that our privacy has effectively been destroyed? Sure, when we’re asked in a survey if we’re worried, most of us say yes. But our actions say otherwise. Would we trade away the convenience and utility these platforms offer us in order to get our privacy back? Probably not. And all the platforms know that.

As I said at the beginning, our privacy has been sliding down a slippery slope for a long time now. And with AI now in the picture, it’s probably going down for the last time. There is really no more slope left to slide down.

What If We Let AI Vote?

In his bestseller “Homo Deus,” Yuval Noah Harari suggests AI might mean the end of democracy. And his reasoning comes from an interesting perspective: how societies crunch their data.

Harari acknowledges that democracy might have been the best political system available to us, up to now, because it relied on the wisdom of crowds. The operating hypothesis is that if you get enough people together, each with different bits of data, you benefit from the aggregation of that data. Theoretically, if you allow everyone to vote, the aggregated data will guide the majority to the best possible decision.

Now, there is a truckload of “yeah, buts” in that hypothesis, but it does make sense. If the human ability to process data was the single biggest bottleneck in making the best governing decisions, distributing the processing among a whole bunch of people was a solution. Not the perfect solution, perhaps, but probably better than the alternatives. As Winston Churchill put it, “It has been said that democracy is the worst form of government except for all those other forms that have been tried from time to time.”
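To see why that aggregation hypothesis is plausible, here is a toy simulation (my own sketch, not anything from Harari): give a crowd of voters noisy, independent estimates of some true value, and compare the crowd’s average against a typical individual.

```python
# Toy wisdom-of-crowds simulation: many noisy individual estimates,
# aggregated by averaging, land closer to the truth than a typical
# individual does. All numbers are arbitrary illustration.

import random

random.seed(42)
TRUE_VALUE = 100.0
CROWD_SIZE = 1000

# Each voter sees the truth plus their own idiosyncratic error.
estimates = [TRUE_VALUE + random.gauss(0, 25) for _ in range(CROWD_SIZE)]

crowd_error = abs(sum(estimates) / CROWD_SIZE - TRUE_VALUE)
avg_individual_error = sum(abs(e - TRUE_VALUE) for e in estimates) / CROWD_SIZE

print(f"Average individual error: {avg_individual_error:.1f}")
print(f"Error of the crowd's average: {crowd_error:.1f}")
```

The catch is that the averaging only works when the individual errors are independent, which is exactly the assumption that the echo chambers discussed below break.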

So, if we look back at our history, democracy seems to emerge as the winner. But the whole point of Harari’s “Homo Deus” is to look forward. It is, he promises, “A Brief History of Tomorrow.” And that tomorrow includes a world with AI, which blows apart the human data-processing bottleneck: “As both the volume and speed of data increase, venerable institutions like elections, parties and parliaments might become obsolete – not because they are unethical, but because they don’t process data efficiently enough.”

The other problem with democracy is that the data we use to decide is dirty. Increasingly, thanks to the network-effect anomalies that come with social media, we are using data that has no objective value; it’s simply the emotional effluent of ideological echo chambers. This is true on both the right and left ends of the political spectrum. Human brains default to using available and easily digestible information that happens to conform to our existing belief schema. Thanks to social media, there is no shortage of this severely flawed data.

So, if AI can process data exponentially faster than humans, can analyze that data to make sure it meets some type of objectivity threshold, and can make decisions based on algorithms that are dispassionately rational, why shouldn’t we let AI decide who should form our governments?

Now, I pretty much guarantee that many of you, as you’re reading this, are saying that this is B.S. This will, in fact, be humans surrendering control in the most important of arenas. But I must ask in all seriousness, why not? Could AI do worse than we humans do? Worse than we have done in the past? Worse than we might do again in the very near future?

These are exactly the kinds of existential questions we have to ask when we ponder our future in a world that includes AI.

It’s no coincidence that we have some hubris when it comes to believing we’re the best choice to be put in control of a situation. As Harari admits, the liberal human view that we have free will and should have control of our own future was really the gold standard. Like democracy, it wasn’t perfect, but it was better than all the alternatives.

The problem is that there is now a lot of solid science indicating that our concept of free will is an illusion. We are driven by biological algorithms built up over thousands of years to survive in a world that no longer exists. We self-apply a thin veneer of rationality and free will after the fact to make ourselves believe we were in control and meant to do whatever it was we did. What’s even worse, when it appears we might have been wrong, we double down on the mistake, twisting the facts to conform to our illusion of how we believe things are.

But we now live in a world where there is – or soon will be – a better alternative. One without the bugs that proliferate in the biological OS that drives us.

As another example of this impending crisis of our own consciousness, let’s look at driving.

Up to now, a human was the best choice to drive a car. We were better at it than chickens or chimpanzees. But we are at the point where that may no longer be true. There is a strong argument that, as of today, autonomous cars guided by AI are safer than human-controlled ones. And if the jury is still out on this question today, the verdict is certainly coming in the very near future. Yet we humans are loath to admit the inevitable and give up the wheel. It’s the same story as making our democratic choices.

So, let’s take it one step further. If AI can do a better job than humans in determining who should govern us, it will also do a better job of the actual governing. All the same caveats apply. When you think about it, democracy boils down to various groups of people pointing fingers at those chosen by other groups, saying they will make more mistakes than our choice would. The common denominator is this: everyone is assumed to make mistakes. And that is absolutely the case. Right or left, Republican or Democrat, liberal or conservative, no matter who is in power, they will screw up. Repeatedly.

Because they are, after all, only human.

Fooling Some of the Systems Some of the Time

If there’s a system, there’s a way to game it. Especially when those systems are tied to someone making money.

Buying a Best Seller

Take publishing, for instance. New books that make the New York Times best-seller list sell more copies than ones that don’t. A 2004 study by University of Wisconsin economics professor Alan Sorensen found the bump is about 57%. That’s certainly motivation for a publisher to game the system.

There’s also another motivating factor. According to a Times op-ed, Michael Korda, former editor in chief of Simon & Schuster, said that an author’s contract can include a bonus of up to $100,000 for hitting No. 1 on the list.

This amplifying effect is not a one-shot deal. Make the list for just one week, in any slot under any category, and you can forever call yourself a “New York Times bestselling author,” reaping the additional sales that honor brings with it. Given the potential rewards, you can guarantee that someone is going to game the system.

And how do you do that? Typically, by making a bulk purchase through an outlet that feeds its sales numbers to The Times. That’s what Donald Trump Jr. and his publisher did for his book “Triggered,” which hit No. 1 on its release in November of 2019, according to various reports. Just before the release, the Republican National Committee reportedly placed a $94,800 order with a bookseller, which would equate to about 4,000 books, enough to ensure that “Triggered” would end up on the Times list. (Note: The Times does flag suspicious entries with a dagger symbol when it believes someone may be gaming the system by buying in bulk.)
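For what it’s worth, the per-copy arithmetic implied by those reported figures is simple:

\[
\$94{,}800 \div 4{,}000 \text{ copies} \approx \$23.70 \text{ per copy}
\]

which is presumably the discounted hardcover price used to arrive at the 4,000-copy estimate.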

But it’s not only book sales where you’ll find a system primed for rigging. Even those supposedly objective five-star buyer ratings you find everywhere have been gamed.

5-Star Scams

A 2021 McKinsey report said that, depending on the category, a small bump in a star rating on Amazon can translate into a 30% to 200% boost in sales. Given that potential windfall, it’s no surprise that fake-review scams proliferate on the gargantuan retail platform.

A recent Wired exposé on these fake reviews found a network that had achieved a sobering level of sophistication. It included active recruitment of human reviewers (called “Jennies” — if you haven’t been recruited yet, you’re a “Virgin Jenny”) willing to write a fake review for a small payment or free products. These networks include agents in Pakistan, Bangladesh and India working for sellers from China.

But the fake review ecosystem also included reviews cranked out by AI-powered automated agents. As AI improves, these types of reviews will be harder to spot and weed out of the system.

Some recent studies have found that, depending on the category, over one-third of the reviews you see on Amazon are fake. Books, baby products and large appliance categories are the worst offenders.

Berating Ratings…

Back in 2014, Itamar Simonson and Emanuel Rosen wrote a book called “Absolute Value: What Really Influences Customers in the Age of (Nearly) Perfect Information.” Spoiler alert: they posited that consumer reviews and other sources of objective information were replacing traditional marketing and branding in terms of what influenced buyers.

They were right. The stats I cited above show how powerful these supposedly objective factors can be in driving sales. But unfortunately, thanks to the inevitable attempts to game these systems, the information they provide can often be far from perfect.

Greetings from the Great, White (Frozen) North

This post comes to you from Edmonton, Alberta, where the outside temperature right now is minus forty degrees Celsius. If you’re wondering what that is in Fahrenheit, the answer is, “It doesn’t matter.” Minus forty is where the two scales match up.
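If you want to see why, set Fahrenheit equal to Celsius in the standard conversion formula and solve:

\[
F = \tfrac{9}{5}C + 32, \qquad F = C \;\Rightarrow\; C = \tfrac{9}{5}C + 32 \;\Rightarrow\; -\tfrac{4}{5}C = 32 \;\Rightarrow\; C = -40
\]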

If you add a bit of a breeze to that, you get a windchill factor that makes it feel like minus fifty Celsius (-58° F). The weather lady on the morning news just informed me that at that temperature, exposed flesh freezes in two to five minutes. Yesterday, an emergency alert flashed on my phone warning us that Alberta’s power grid was overloaded and could collapse under the demand, causing rotating power outages.

I don’t know about you, but I don’t think anyone should live in a place where winter can kill you. Nothing works as it should when it gets this cold, humans included. And yet, Albertans are toughing it out. I noticed that when it gets this cold, the standard niceties that people say change. Instead of telling me to “have a nice day,” everyone has been encouraging me to “stay warm.”

There’s a weird sort of bonding that happens when the weather becomes the common enemy. Maybe we all become brothers and sisters in arms, struggling to survive against the elements. It got me to wondering: Is there a different sense of community in places where it’s really cold in the winter?

When I asked Google which countries had the strongest social ties, it gave me a list of nine: Finland, Norway, Canada, Denmark, Switzerland, Australia, Netherlands, Iceland and Italy. Seven of those places have snowy, cold winters. If you look at countries that have strong social democracies — governments established around the ideal of the common good — again, you’ll find that most of them are well north (or south, in the case of New Zealand) of the equator.

But let’s leave politics aside. Maybe it’s just the act of constantly transitioning from extreme cold to warm and cozy places where there’s a friendly face sincerely wishing you’ll “stay warm” that builds stronger social bonds. As I mentioned in a previous post, the Danes even have a name for it: hygge. It translates loosely to “coziness.”

There are definitely physical benefits to going from being really cold to being really warm. The Finns discovered this secret thousands of years ago when they created the sauna. The whole idea is to go repeatedly from a little hut where the temperature hovers around 80-90° C (176-194° F) to a plunge through a hole you’ve cut in the ice, into water barely above freezing. A paper from the Mayo Clinic lists the health benefits of saunas in a rather lengthy paragraph, touching on everything from reducing inflammation to clearer skin to fighting the flu.

But the benefits aren’t just physical. Estonia, which is just south of Finland, also has a strong sauna culture. A brilliant documentary by Anna Hints, “Smoke Sauna Sisterhood,” shows that the sauna can be a sacred space. As Estonia’s official submission to the Oscars, it’s in contention for a nomination.

Hints’ documentary shows that saunas can touch us on a deeply spiritual level, healing scars that can build up through our lives. There is something in the cycle of heat and cold that taps into inner truths. As Hints said in a recent interview, “With time, deeper, deeper layers of physical dirt start to come up to the surface, but also emotional dirt starts to come up to the surface.”

While I didn’t visit any saunas on my Edmonton trip, every time I ventured outside it was a hot-cold adventure. Everyone turns the thermostat up a little when it gets this cold, so you’re constantly going through doors where the temperature can swing 75 degrees Celsius (135 degrees Fahrenheit) in an instant. I don’t know if there’s a health benefit, but I can tell you it feels pretty damned good to get that warm welcome when you’re freezing your butt off.

Stay warm!

A Look Back at 2023 from the Inside

(Note: This refers to the regular feature on MediaPost, Media Insider, which I write every Tuesday.)

It seems that every two years, I look back at what the Media Insiders were musing about over the past year. The ironic part is that I’m not an Insider. I haven’t been “inside” the Media industry for over a decade. Maybe that affords me just enough distance to be what I hope could be called an “informed observer.”

I first did this in 2019, and then again in 2021. This year, I decided to grab the back of an envelope (literally) and redo this far-from-scientific poll. Categorization of themes is always a challenge when I do this, but some themes have definitely been consistent across the past five years. I have tremendous respect for my fellow Insiders, and I always find it enlightening to learn what was on their minds.

In 2019, the top three things we were thinking about were (in order): disruptions in the advertising business, how technology is changing us and how politics changed social media.

In 2021, the top three topics included (again) how technology was changing us, general marketing advice and the toxic impact of social media.

So, what about 2023? What were we writing about? After eliminating the columns that were reruns, I ended up with 230 posts in the past year.

It probably comes as a surprise to no one that artificial intelligence was the number one topic by a substantial margin. Almost 15% of all our Insider posts talked about the rise of AI and its impact on – well – pretty much everything!

The number two topic – at 12% – was TV, video and movies. Most of the posts touched on how this industry is going through ongoing disruption in every aspect – creation, distribution, buying and measurement.

Coming in at number three, at just under 12%, was social media. As in previous years, most of the posts were about the toxic nature of social media, but there was a smattering of case studies about how social platforms were used for positive change.

We Insiders have always been an existential bunch, and last year was no different. Our number four topic was our struggle to stay human in a world increasingly dominated by tech. This accounted for almost 11% of all our posts.

The next two most popular topics were both firmly grounded in the marketing industry itself. Posts about how to be a better marketer generated almost 9% of Insider content for 2023, and various articles about the business of tech marketing added another 8%.

Continuing down the list, we have world events and politics (Dave Morgan’s columns about Ukraine were a notable addition to this topic), examples of marketing gone wrong, and the art and science of brand building.

We also looked at the phenomenon of fame and celebrity, sustainability, and the state of the news industry. In what might have been a wistful look back at what we remember as simpler times, there were even a few columns about retro media, including the resurgence of the LP.

Interestingly, former hot topics like performance measurement, data and search all clustered near the bottom of the list in terms of the number of posts covering them.

With 2023 in our rearview mirror, what are the takeaways? What can we glean from the collected year-long works of these very savvy and somewhat battle-weary veterans of marketing?

Well, the word “straddle” comes to mind. We all seem to have one foot still planted in the world and industry we thought we knew and one tentatively dipping its toes into the murky waters of what might come. You can tell that the Media Insiders are no less passionate about the various forms of media we write about, but we do go forward with the caution that comes from having been there and done that.

I think that, in total, I found a potentially worrying duality in this review of our writing. Give or take a few years, all my fellow Insiders are of the same generation. But we are not your typical Gen-Xers/Baby Boomers (or, in my case, caught in the middle as a member of Generation Jones). We have worked with technology all our lives. We get it. The difference is, we have also accumulated several decades of life wisdom. We are past the point where we’re mesmerized by bright shiny objects. I think this gives us a unique perspective. And, based on what I read, we’re more than a little worried about what the future might bring.

Take that for what it’s worth.

A Column About Nothing

What do I have to say in my last post for 2023? Nothing.

Last week, I talked about the cost of building a brand. Then, this week, I (perhaps the last person on earth to do so) heard about Nothing. No – not small-“n” nothing, as in the absence of anything – big-“N” Nothing, as in the London-based tech start-up headed by Chinese-born entrepreneur Carl Pei.

Nothing, according to its website, crafts “intuitive, flawlessly connected products that improve our lives without getting in the way. No confusing tech-speak. No silly product names. Just artistry, passion and trust. And products we’re proud to share with our friends and family. Simple.”

Now, just like the football talents of David Beckham I explored in my last post, the tech Nothing produces is good – very good – but not uniquely good. The Nothing Phone (1) and the just-released Nothing Phone (2) are capable mid-range smartphones. Again, from the Nothing website, you are asked to “imagine a world where all your devices are seamlessly connected.”

It may just be me, but isn’t that what Apple has been promising (and occasionally delivering) for the better part of the last quarter century? Doesn’t Google make the same basic promise? Personally, I see nothing earth-shaking in Nothing’s mission. It all feels very “been there, done that.” Or, if you’ll allow me, it all seems like much ado about Nothing (sorry). Yet people paid thousands over the asking price when 100 units of the first Nothing Phone were put up for auction prior to its public launch.

Why? Because of the value of the Nothing brand. And that value comes from one place. No, not the tech. The community. Pei may be a pretty good builder of phones, but he’s an even better builder of community. He has expertly built a fan base who love to rave about Nothing. On the “Community” section of the Nothing website, you’re invited to “abandon the glorification of I and open up to the potential of We.” I’m not sure exactly what that means, but it all sounds very cool and idealistic, if a little vague.

Another genius move by Pei was to open up to the potential of Nothing itself. In what is probably a latent (or perhaps not-so-latent) backlash against over-advertising and in-your-face branding, we were eager to jump on the Nothing bandwagon. It seems like anti-branding, but it’s not. It’s actually expertly crafted, by-the-book branding. Just ask Seinfeld: a show about nothing became one of the most popular TV shows in history. There is some serious branding swagger to the concept of nothing. I can’t believe no one thought to stake a claim to this branding goldmine before now.

OpenAI’s Q* – Why Should We Care?

OpenAI founder Sam Altman’s ouster and reinstatement has rolled through the typical news cycle, and we’re now back to blissful ignorance. But I think this will be one of those sea-change moments: a tipping point that we’ll look back on in the future, when AI has changed everything we thought we knew, and wonder, “How the hell did we let that happen?”

Sometimes I think tech companies use acronyms and cryptic names for new technologies so they can sneak game changers in without setting off alarm bells. Take OpenAI, for example. How scary does Q-Star sound? It’s just one more vague label for something we really don’t understand.

If I’m right, we do have to ask the question, “Who is keeping an eye on these things?”

This week I decided to dig into the whole Sam Altman firing/hiring episode a little more closely so I could understand if there’s anything I should be paying attention to. Granted, I know almost nothing about AI, so what follows is very much at the layperson level, but I think that’s probably true for the vast majority of us. I don’t run into AI engineers that often in my life.

So, should we care about what happened a few weeks ago at OpenAI? In a word – YES.

First of all, a little bit about the dynamics of what led to Altman’s original dismissal. OpenAI started with the best of altruistic intentions: to “ensure that artificial general intelligence benefits all of humanity.” That was an ideal – many would say a naïve ideal – that Altman and OpenAI’s founders imposed on themselves. As Google discovered with its “Don’t Be Evil” mantra, it’s really hard to be successful and idealistic at the same time. In our world, success is determined by profits, and idealism and profitability almost never play in the same sandbox. Google quietly watered down the “Don’t Be Evil” motto until it virtually disappeared in 2018.

OpenAI’s non-profit board was set up as a kind of internal “kill switch” to prevent the development of technologies that could be dangerous to the human race. That theoretical structure was put to the test when the board received a letter this year from some senior researchers at the company, warning of a new artificial intelligence discovery that might take AI past the threshold where it could be harmful to humans. The board then did what it was set up to do, firing Altman, removing board chairman Greg Brockman and putting the brakes on the potentially dangerous technology. Then Big Brother Microsoft (which has invested $13 billion in OpenAI) stepped in, and suddenly Altman was back. (Note: For a far more thorough and fascinating look at OpenAI’s unique structure and the endemic problems with it, read through Alberto Romero’s series of thoughtful posts.)

There were probably two things behind Altman’s ouster: the potential capabilities of a new development called Q-Star, and a fear that OpenAI would follow its previous path of throwing the technology out into the world without considering potential consequences. So, why is Q-Star so troubling?

Q-Star could be a major step closer to AI that can reason and plan. This moves us closer to the overall goal of artificial general intelligence (AGI), the holy grail for every AI developer, including OpenAI. Artificial general intelligence, per OpenAI’s own definition, means “AI systems that are generally smarter than humans.” Q-Star, through its ability to tackle grade-school math problems, showed the promise of being artificial intelligence that could plan and reason. And that is an important tipping point, because something that can reason and plan pushes us forever past the boundary of a tool under human control. It’s technology that thinks for itself.

Why should this worry us? Because of Herbert Simon’s concept of “bounded rationality,” which holds that we humans are incapable of pure rationality. At some point we stop thinking endlessly about a question and come up with an answer that’s “good enough.” We do this because of our limited processing power: emotions take over and make the decision for us.
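For the programmers in the room, here is a rough sketch of the difference (my own analogy, not Simon’s formalism): a bounded, satisficing agent takes the first option that clears a “good enough” bar, while an unbounded optimizer insists on scoring everything.

```python
# A rough analogy for Herbert Simon's "satisficing": a bounded agent
# stops at the first option that is good enough; an unbounded
# optimizer evaluates every option. Scores and threshold are invented.

def satisfice(options, score, good_enough):
    """Return the first option whose score clears the threshold."""
    for option in options:
        if score(option) >= good_enough:
            return option
    return None  # nothing cleared the bar

def optimize(options, score):
    """Evaluate every option and return the true best one."""
    return max(options, key=score)

options = ["plan A", "plan B", "plan C", "plan D"]
score = {"plan A": 0.4, "plan B": 0.7, "plan C": 0.9, "plan D": 0.6}.get

print(satisfice(options, score, good_enough=0.65))  # plan B: good enough
print(optimize(options, score))                     # plan C: the true best
```

An AGI, in this framing, is the second function with effectively no limit on the number of options it can score.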

But AGI throws those limits away. It can process exponentially more data at a rate we can’t possibly match. If we’re looking at AI through Sam Altman’s rose-colored glasses, that should be a benefit. Wouldn’t it be better to have decisions made rationally, rather than emotionally? Shouldn’t that be a benefit to mankind?

But here’s the rub. Compassion is an emotion. Empathy is an emotion. Love is also an emotion. What kind of decisions do we come to if we strip that out of the algorithm, along with any type of human check and balance?

Here’s an example. Let’s say that at some point in the future an AGI superbrain is asked the question, “Is the presence of humans beneficial to the general well-being of the earth?”

I think you know what the rational answer to that is.

When AI Love Goes Bad

When we think about AI and its implications, it’s hard to wrap our own non-digital, flesh-and-blood brains around the magnitude of it. Try as we might, it’s impossible to forecast the impact of this massive wave of disruption that’s bearing down on us. So today, in order to see what some of the unintended consequences might be, I’d like to zoom in on one particular example.

There is a new app out there. It’s called Anima, and it’s an AI girlfriend. It’s not the only one. When it comes to potential virtual partners, there are plenty of fish in the sea. But for this post, let’s stay true to Anima. Here’s the marketing blurb on her website: “The most advanced romance chatbot you’ve ever talked to. Fun and flirty dating simulator with no strings attached. Engage in a friendly chat, roleplay, grow your love & relationship skills.”

Now, if there’s one area where our instincts should kick in and alarm bells should start going off about AI, it’s sexual attraction. If there were one human activity that seems bound by necessity to being ITRW (in the real world), it should be this one.

If we start to imagine what might happen when we turn to AI for love, we could ask filmmaker Spike Jonze. He already imagined it 10 years ago, when he wrote the screenplay for “her,” the movie with Joaquin Phoenix. Phoenix plays Theodore Twombly, a soon-to-be-divorced man who upgrades his computer to a new OS, only to fall in love with the virtual assistant (voiced by Scarlett Johansson) that comes as part of the upgrade.

Predictably, complications ensue.

To get back to Anima: I’m always amused by the marketing language developers use to lull us into accepting things we should be panicking about. In this case, it was two lines: “no strings attached” and “grow your love and relationship skills.”

First, about that “no strings attached” thing – I have been married for 34 years now, and I’m here to tell you that relationships are all about strings. Those strings go by other names: empathy, consideration, respect, compassion and – yes – love. Is it easy to keep those strings attached, to stay connected with the person at the other end? Hell, no! It is a constant, daunting, challenging work in progress. But the alternative is cutting those strings and being alone. Really alone.

If we get the illusion of a real relationship through some flirty version of ChatGPT, will it be easier to cut the strings that keep us connected to the real people out there? Will we be fooled into thinking something is real when it’s just a seductive algorithm? In “her,” Jonze brings Twombly back to the real world, ending with the promise of a relationship with a real person as they both gaze at the sunset. But I worry that that’s just a Hollywood ending. I think many people – maybe most people – would rather stick with the “no strings attached” illusion. It’s just easier.

And will AI adultery really “grow your love and relationship skills”? No. No more than scrolling through your Facebook feed will grow your ability to determine accurate and reliable information. That’s just a qualifier the developer threw in so they didn’t feel crappy about leading their customers down the path to “AI-rmageddon.”

Even if we put all this other stuff aside for the moment, consider the vulnerable position we put ourselves in when we start mistaking robotic love for the real thing. All great cons rely on one of two things – either greed or love. When we think we’re in love, we drop our guard. We trust when we probably shouldn’t.

Take the Anima artificial girlfriend app, for example. We know nothing about the makers of this app. We don’t know where the data collected goes. We certainly have no idea what their intentions are. Is this really who you want to share your most intimate chitchat with? Even if their intentions are benign, this is an app built by a for-profit company, which means there needs to be a revenue model in it somewhere. I’m guessing that all your personal data will be sold to the highest bidder.

You may think all this talk of AI love is simply stupid. We humans are too smart to be sucked in by an algorithm. But study after study has shown we’re not. We’re ready to make friends with a robot at the drop of a hat. And once we hit friendship, can love be far behind?