Dove’s Takedown Of AI: Brilliant But Troubling Brand Marketing

The Dove brand has just placed a substantial stake in the battleground over the use of AI in media. In a campaign called “Keep Beauty Real”, the brand released a 2-minute video showing how AI can create an unattainable and highly biased (read “white”) view of what beauty is.

If we’re talking branding strategy, this campaign is a master class. It’s totally on-brand with Dove, which introduced its “Campaign for Real Beauty” 18 years ago. Since then, the company has consistently fought digital manipulation of advertising images, promoted positive body image and reminded us that beauty can come in all shapes, sizes and colors. The video itself is brilliant. You really should take a couple of minutes to see it if you haven’t already.

But what I found just as interesting is that Dove chose to use AI as a brand differentiator. The video starts by telling us, “By 2025, artificial intelligence is predicted to generate 90% of online content.” It wraps up with a promise: “Dove will never use AI to create or distort women’s images.”

This makes complete sense for Dove. It aligns perfectly with its brand. But it can only work because AI now has what psychologists call emotional valency. And that has a number of interesting implications for our future relationship with AI.

“Hot Button” Branding

Emotional valency is just a fancy way of saying that a thing means something to someone. The valence can be positive or negative. The term valence comes from the German word valenz, which means to bind. So, if something has valency, it’s carrying emotional baggage, either good or bad.

This is important because emotions allow us to — in the words of Nobel laureate Daniel Kahneman — “think fast.” We make decisions without really thinking about them at all. It is the opposite of rational and objective thinking, or what Kahneman calls “thinking slow.”

Brands are all about emotional valency. The whole point of branding is to create a positive valence attached to a brand. Marketers don’t want consumers to think. They just want them to feel something positive when they hear or see the brand.

So for Dove to pick AI as an emotional hot button to attach to its brand, it must believe that the negative valence of AI will add to the positive valence of the Dove brand. That’s how branding math sometimes works: a negative added to a positive may not equal zero, but may equal 2 or more. Dove is gambling that with its target audience, the math will work as intended.

I have nothing against Dove, as I think the points it raises about AI are valid — but here’s the issue I have with using AI as a brand reference point: It reduces a very complex issue to a knee-jerk reaction. We need to be thinking more about AI, not less. The consumer marketplace is not the right place to have a debate on AI. It will become an emotional pissing match, not an intellectually informed analysis. And to explain why I feel this way, I’ll use another example: GMOs.

How Do You Feel About GMOs?

If you walk down the produce or meat aisle of any grocery store, I guarantee you’re going to see a “GMO-Free” label. You’ll probably see several. This is another example of squeezing a complex issue into an emotional hot button in order to sell more stuff.

As soon as I mentioned GMO, you had a reaction to it, and it was probably negative. But how much do you really know about GMO foods? Did you know that GMO stands for “genetically modified organisms”? I didn’t, until I just looked it up now. Did you know that you almost certainly eat foods that contain GMOs, even if you try to avoid them? If you eat anything with sugar harvested from sugar beets, you’re eating GMOs. And over 90% of all canola, corn and soybean crops are GMOs.

Further, did you know that genetic modifications make plants more resistant to disease, more stable for storage and more likely to grow in marginal agricultural areas? If it wasn’t for GMOs, a significant portion of the world’s population would have starved by now. A 2022 study suggests that GMO foods could even slow climate change by reducing greenhouse gases.

If you do your research on GMOs — if you “think slow” about them — you’ll realize that there is a lot to think about, both good and bad. For all the positives I mentioned before, there are at least an equal number of troubling things about GMOs. There is no easy answer to the question, “Are GMOs good or bad?”

But by bringing GMOs into the consumer world, marketers have shut down that debate. They are telling you, “GMOs are bad. And even though you consume GMOs by the shovelful without even realizing it, we’re going to slap some GMO-free labels on things so you will buy them and feel good about saving yourself and the planet.”

AI appears to be headed down the same path. And if GMOs are complex, AI is exponentially more so. Yes, there are things about AI we should be concerned about. But there are also things we should be excited about. AI will be instrumental in tackling the many issues we currently face.

I can’t help worrying when complex issues like AI and GMOs are painted with the same broad brush, especially when that brush is in the hands of a marketer.

Feature image: Body Scan 002 by Ignotus the Mage, used under CC BY-NC-SA 2.0 / Unmodified

AI Customer Service: Not Quite Ready For Prime Time

I had a problem with my phone, which is a landline (and yes, I’ve heard all the smartass remarks about being the last person on earth with a landline, but go ahead, take your best shot).

The point is, I had a problem. Actually, the phone had a problem, in that it didn’t work. No tone, no life, no nothing. So that became my problem.

What did I do? I called my provider (from my cell, which I do have) and after going through this bizarre ID verification process that basically stopped just short of a DNA test, I got routed through to their AI voice assistant, who pleasantly asked me to state my problem in one short sentence.

As soon as I heard that voice, which used the same dulcet tones as Siri, Alexa and the rest of the AI Geek Chorus, I knew what I was dealing with. Somewhere at a board table in the not-too-distant past, somebody had come up with the brilliant idea of using AI for customer service. “Do you know how much money we could save by cutting humans out of our support budget?” After pointing to a chart with a big bar and a much smaller bar to drive the point home, there would have been much enthusiastic applause and back-slapping.

Of course, the corporate brain trust had conveniently forgotten that they can’t cut all humans out of the equation, as their customers still fell into that category.  And I was one of them, now dealing face to face with the “Artificially Intelligent” outcome of corporate cost-cutting. I stated my current state of mind more succinctly than the one short sentence I was instructed to use. It was, instead, one short word — four letters long, to be exact. Then I realized I was probably being recorded. I sighed and thought to myself, “Buckle up. Let’s give this a shot.”

I knew before starting that this wasn’t going to work, but I wasn’t given an alternative. So I didn’t spend too much time crafting my sentence. I just blurted something out, hoping to bluff my way to the next level of AI purgatory. As I suspected, Ms. AI was stumped. But rather than admit she was scratching her metaphysical head, she repeated the previous instruction, preceded by a patronizing “pat on my head” recap that sounded very much like it was aimed at someone with the IQ of a soap dish. I responded again with my four-letter reply — repeated twice, just for good measure.

Go ahead, record me. See if I care.

This time I tried a roundabout approach, restating my issue in terms that hopefully could be parsed by the cybernetic sadist that was supposedly trying to help me. Needless to say, I got no further. What I did get was a helpful text with all the service outages in my region. Which I knew wasn’t the problem. But no one asked me.

I also got a text with some troubleshooting tips to try at home. I had an immediate flashback to my childhood, trying to get my parents’ attention while they were entertaining friends at home, “Did you try to figure it out yourself, Gordie? Don’t bother Mommy and Daddy right now. We’re busy doing grown up things. Run along and play.”

At this point, the scientific part of my brain started toying with the idea of making this an experiment. Let’s see how far we can push the boundaries of this bizarre scenario, equally frustrating and entertaining. My AI tormenter asked me, “Do you want to continue to try to troubleshoot this on the phone with me?”

I was tempted, I really was. Probably by the same part of my brain that forces me to smell sour milk or open the lid of that unidentified container of green fuzz that I just found in the back of the fridge.  And if I didn’t have other things to do in my life, I might have done that. But I didn’t. Instead, in desperation I pleaded, “Can I just talk to a human, please?”

Then I held my breath. There was silence. I could almost hear the AI wheels spinning. I began to wonder if some well-meaning programmer had included a subroutine for contrition. Would she start pleading for forgiveness?

After a beat and a half, I heard this, “Before I connect you with an agent, can I ask you for a few more details so they’re better able to help you?” No thanks, Cyber-Sally, just bring on a human, posthaste! I think I actually said something to that effect. I might have been getting a little punchy in my agitated state.

As she switched me to my requested human, I swore I could hear her mumble something in her computer-generated voice. And I’m pretty sure it was an imperative with two words, the first a verb with four letters, the second a subject pronoun with three letters.

And, if I’m right, I may have newfound respect for AI. Let’s just call it my version of the Turing Test.

We SHOULD Know Better — But We Don’t

“The human mind is both brilliant and pathetic. Humans have built hugely complex societies and technologies, but most of us don’t even know how a toilet works.”

– from “The Knowledge Illusion: Why We Never Think Alone” by Steven Sloman and Philip Fernbach

Most of us think we know more than we do — especially about things we really know nothing about. This phenomenon is called the Dunning-Kruger Effect. Named after psychologists Justin Kruger and David Dunning, this bias causes us to overestimate our ability to do things that we’re not very good at.

That’s the basis of the new book “The Knowledge Illusion: Why We Never Think Alone.” The basic premise is this: We all think we know more than we actually do. Individually, we are all “error prone, sometimes irrational and often ignorant.” But put a bunch of us together and we can do great things. We were built to operate in groups. We are, by nature, herding animals.

This basic human nature was in the back of my mind when I was listening to an interview with Es Devlin on CBC Radio. Devlin describes herself as an artist and stage designer. She was the vision behind Beyoncé’s Renaissance Tour, U2’s current run at The Sphere in Las Vegas, and the 2022 Super Bowl halftime show with Dr. Dre, Snoop Dogg, Eminem and Mary J. Blige.

When it comes to designing a visually spectacular experience, Devlin has every right to be a little cocky. But even she admits that not every good idea comes directly from her. She said the following in the interview (it’s profound, so I’m quoting it at length):

“I learned quite quickly in my practice to not block other people’s ideas — to learn that, actually,  other people’s ideas are more interesting than my own, and that I will expand by absorbing someone else’s idea.

“The real test is when someone proposes something in a collaboration that you absolutely, [in] every atom of your body, revile against. They say, ‘Why don’t we do it in bubblegum pink?’ and it was the opposite of what you had in mind. It was the absolute opposite of anything you would dream of doing.

“But instead of saying, ‘Oh, we’re not doing that,’  you say ‘OK,’ and you try to imagine it. And then normally what will happen is that you can go through the veil of the pink bubblegum suggestion, and you will come out with a new thing that you would never have thought of on your own.

“Why? Because your own little batch of poems, your own little backpack of experience, does not converge with that other person, so you are properly meeting not just another human being, but everything that led up to them being in that room with you.”

From an interview with Tom Power on Q – CBC Radio, March 18, 2024

We live in a culture that puts the individual on a pedestal. When it comes to individualistic societies, none are more so than the United States (according to a study by Hofstede Insights). Protection of personal rights and freedoms is the cornerstone of our society (I am Canadian, but we’re not far behind on this world ranking of individualistic societies). The same is true in the U.K. (where Devlin is from), Australia, the Netherlands and New Zealand.

There are good things that come with this, but unfortunately it also sets us up as the perfect targets for the Dunning-Kruger effect. This individualism and the cognitive bias that comes with it are reinforced by social media. We all feel we have the right to be heard — and now we have the platforms that enable it.

With each post, our unshakable belief in our own genius and infallibility is bulwarked by a chorus of likes from a sycophantic choir jamming their fingers down on the like button. Where we should be skeptical of our own intelligence and knowledge, especially about things we know nothing about, we are instead lulled into hiding behind dangerous ignorance.

What Devlin has to say is important. We need to be mindful of our own limitations and be willing to ride on the shoulders of others so we can see, know and do more. We need to peek into the backpack of others to see what they might have gathered on their own journey.

(Feature Image – Creative Commons – https://www.flickr.com/photos/tedconference/46725246075/)

Post-mortem of a Donald Trump Sound Bite

This past weekend, Donald Trump was campaigning in Dayton, Ohio. This should come as news to no one. You’ve all probably seen various blips come across your social media radar. And, as often happens, what Trump said has been picked up in the mainstream press.

Now, I am quite probably the last person in the world who would ever come to Donald Trump’s defense. But I did want to use this one example to show how it’s the media, including social media, that is responsible for the distortion of reality we so often see.

My first impression of what happened is that Trump promised a retributive bloodbath for any and all opposition if he’s not elected president. And, like many of you, that first impression came through my social media feeds. Joe Biden’s X (formerly Twitter) post said, “It’s clear this guy wants another January 6th.” Republican lawyer and founding member of the Lincoln Project George Conway also posted: “This is utterly unhinged.”

There was also retweeting of ABC coverage featuring a soundbite from Trump that said there would be a bloodbath if he is not re-elected in November. This was conflated with Trump’s decision to open the stump speech with a recording of “Justice for All” by the J6 Choir, made up of inmates awaiting trial for their roles in the infamous insurrection after the last election. Trump saluted during the playing of the recording.

To be crystal clear, I don’t condone any of that. But that’s not the point. I’m not the audience this was aimed at.

First of all, Donald Trump was campaigning. In this case, he was making a speech aimed at his base in Ohio, many of whom are auto-workers. And the “bloodbath” comment had nothing to do with armed insurrection. It was Trump’s prediction of what would happen if he wasn’t elected and couldn’t protect American auto jobs from the possibility of a trade war with China over auto manufacturing.

But you would be hard pressed to know that based on what you saw, heard or read on either social media or traditional media.

You can say a lot of derogatory things about Donald Trump, but you can’t say he doesn’t know his base or what they want to hear. He’s on the campaign trail to be elected President of the United States. The way that game is played, thanks to a toxic ecosystem created by the media, is to pick your audience and tell them exactly what they want to hear. The more you can get that message amplified through both social and mainstream media, the better. And if you can get your opposition to help you by also spreading the message, you get bonus points.

Trump is an expert at playing that game. He is the personification of the axiom, “There is no such thing as bad press.”

If we try to pin this down to the point where we can assign blame, it becomes almost impossible. There was nothing untrue in the coverage of the Dayton Rally. It was just misleading due to incomplete information, conflation, and the highlighting of quotes without context. It was sloppy reporting, but it wasn’t illegal.

The rot here isn’t acute. It isn’t isolated to one instance. It’s chronic and systemic. It runs through the entire media ecosystem. It benefits from round after round of layoffs that have dismantled journalism and gutted the platforms’ own fact-checking and anti-misinformation teams. Republicans, led by House Judiciary Chairman Jim Jordan, are doubling down on this by investigating alleged anti-conservative censorship by the platforms.

I’m pretty sure things won’t get better. Social media feeds are – if anything – more littered than ever with faulty information and weaponized posts designed solely to provoke. So far, the people running the platforms have managed to slither away from anything resembling responsibility. And the campaigns haven’t even started to heat up. In the 230 days between now and November 5th, the stakes will get higher and posts will become more inflammatory.

Buckle up. It promises to be a bumpy (or Trumpy?) ride!

My Award for the Most Human Movie of the Year

This year I watched the Oscars with a different perspective. For the first time, I managed to watch nine of the 10 best picture nominees (my one exception was “The Zone of Interest”) before Sunday night’s awards. And for each, I asked myself this question: “Could AI have created this movie?” Not AI as it currently stands, but AI in a few years, or perhaps a few decades.

To flip it around, which of the best picture nominees would AI have the hardest time creating? Which movie was most dependent on humans as the creative engine?

AI’s threat to the film industry is at the top of everyone’s mind. It has been mentioned in pretty much every industry awards show. That threat was a major factor in the strikes that shut down Hollywood last year. And it was top of mind for me, as I wrote about it in my post last week.

So Sunday night, I watched as the 10 nominated films were introduced, one by one. And for each, I asked myself, “Is this a uniquely human film?” To determine that, I had to ask myself, “What sets human intelligence apart from artificial intelligence? What elements in the creative process most rely on how our brains work differently from a computer?”

For me, the answer was not what I expected. Using that yardstick, the winner was “Barbie.”

The thing that’s missing in artificial intelligence, for good and bad, is emotion. And from emotion comes instinct and intuition.

Now, all the films had emotion, in spades. I can’t remember a year where so many films driven primarily by character development and story were in the running. But it wasn’t just emotion that set “Barbie” apart; it was the type of emotion.

Some of the contenders, including “Killers of the Flower Moon” and “Oppenheimer,” packed an emotional wallop, but it was a wallop with one note. The emotional arc of these stories was predictable. And things that are predictable lend themselves to algorithmic discovery. AI can learn to simulate one-dimensional emotions like fear, sorrow, or disgust — and perhaps even love.

But AI has a much harder time understanding emotions that are juxtaposed and contradictory. For that, we need the context that comes from lived experience.

AI, for example, has a really tough time understanding irony and sarcasm. As I have written before, sarcasm requires some mental gymnastics that are difficult for AI to replicate.

So, if we’re looking for a backwater of human cognition that has so far escaped the tidal wave of AI bearing down on it, we could well find it in satire and sarcasm.

Barbie wasn’t alone in employing satire. “Poor Things” and “American Fiction” also used social satire as the backbone of their respective narratives.

What “Barbie” director Greta Gerwig did, with exceptional brilliance, was bring together a delicately balanced mix of contradictory emotions for a distinctively human experience. Gerwig somehow managed to tap into the social gestalt of a plastic toy to create a joyful, biting, insightful and ridiculous creation that never once felt inauthentic. It lived close to our hearts and was lodged in a corner of our brains that defies algorithmic simulation. The only way to create something like this was to lean into your intuition and commit fully to it. It was that instinct that everyone bought into when they came onboard the project.

Not everyone got “Barbie.” That often happens when you double down on your own intuition. Sometimes it doesn’t work — but sometimes it does. “Barbie” was the highest grossing movie of last year. Based on that endorsement, the movie-going public got something the voters of the Academy didn’t: the very human importance of Gerwig’s achievement. If you watched the Oscars on Sunday night, the best example of that importance was when Ryan Gosling committed completely to his joyous performance of “I’m Just Ken,” which generated the biggest positive response of the entire evening for the audience.

I can’t imagine an algorithm ever producing a creation like “Barbie.”

The Messaging of Climate Change

86% of the world believes that climate change is a real thing. That’s the finding of a massive new mega study with hundreds of authors (the paper’s author acknowledgement is a page and a half). 60,000 participants from 63 countries around the world took part. And, as I said, 86% of them believe in climate change.

Frankly, there’s no surprise there. You just have to look out your window to see it. Here in my corner of the world, wildfires wiped out hundreds of homes last summer, and just a few weeks ago a weird winter whiplash took temperatures from unseasonably warm to deep freeze cold literally overnight. This anomaly wiped out the region’s wine industry. The only thing I find surprising about the 86% stat is that 14% still don’t believe. That speaks of a determined type of ignorance.

What is interesting about this study is that it was conducted by behavioral scientists. This is an area that has always fascinated me. From the time I read Richard Thaler and Cass Sunstein’s book, Nudge, I have always been interested in behavioral interventions. What are the most effective “nudges” in getting people to shift their behaviors to more socially acceptable directions?

According to this study, that may not be that easy. When I first dove into this study, my intention was to look at how different messages had different impacts depending on the audience: right wing vs left wing for instance. But in going through the results, what struck me the most was just how poorly all the suggested interventions performed. It didn’t matter if you were liberal or conservative or lived in Italy or Iceland. More often than not, all the messaging fell on deaf ears.

What the study did find is that how you craft your campaign about climate change depends on what you want people to do. Do you want to shift non-believers in climate change toward being believers? Then decrease the psychological distance. More simply put, bring the dangers of climate change to their front doorstep. If you live next to a lot of trees, talk about wildfires. If you live on the coast, talk about flooding. If you live in a rural area, talk about the impacts of drought. But it should be noted that we’re not talking about a massive shift here, with an “absolute effect size” of just 2.3%. It was the winner by sheer virtue of sucking the least.

If you want to build support for legislation that mitigates climate change, the best intervention was to encourage people to write a letter to a child close to them, with the intention that the child read it in the future. This forces the writer to put some psychological skin in the game.

Who could write such a letter to a child they care about without making some kind of pledge to make sure there’s still a world that child can live in? And once you do that, you feel obligated to follow through. Once again, this had a minimal impact on behaviors, with an overall effect size of 2.6%.

A year and a half ago, I talked about climate change messaging, debating MediaPost Editor-in-Chief Joe Mandese about whether a doom-and-gloom approach would move the needle on behaviors. In a commentary from the summer of 2022, Mandese wrapped up by saying, “What the ad industry really needs to do is organize a massive global campaign to change the way people think, feel and behave about the climate — moving from a not-so-alarmist ‘change’ to an ‘our house is on fire’ crisis.”

In a follow-up, I worried that doom and gloom might backfire on us: “Cranking up the crisis intensity on our messaging might have the opposite effect. It may paralyze us.”

So, what does this study say?

The answer, again, is, “it depends.” If we’re talking about getting people to share posts on social media, then Doom and Gloom is the way to go. Of all the various messaging options, this had the biggest impact on sharing, by a notable margin.

This isn’t really surprising. A number of studies have shown that negative news is more likely to be shared on social media than positive news.

But what if we’re asking people to make a change that requires some effort beyond clicking the “share” button? What if they actually have to do something? Then, as I suspected, Doom and Gloom messaging had the opposite effect, decreasing the likelihood that people would make a behavioral change to address climate change (the study used a tree planting initiative as an example). In fact, when asking participants to actually change their behavior in an effortful way, all the tested climate interventions either had no effect or, worse, they “depress(ed) and demoralize(d) the public into inaction”.

That’s not good news. It seems that no matter what the message is, or who the messenger is, we’re likely to shoot them if they’re asking us to do anything beyond burying our heads in the sand.

What’s even worse, we may be losing ground. A study from 10 years ago by Yale University had more encouraging results. It showed that effective climate change messaging was able to shift public perceptions by up to 19 percent. While not nearly as detailed as this study, those results seem to indicate a backslide in the effectiveness of climate messaging.

One of the commentators who covered the new worldwide study perhaps summed it up best by saying, “if we’re dealing with what is probably the biggest crisis ever in the history of humanity, it would help if we actually could talk about it.”

Privacy’s Last Gasp

We’ve been sliding down the slippery slope of privacy rights for some time. But like everything else in the world, the rapid onslaught of disruption caused by AI is unfurling a massive red flag when it comes to any illusions we may have about our privacy.

We have been giving away a massive amount of our personal data for years now without really considering the consequences. If we do think about privacy, we do so as we hear about massive data breaches. Our concern typically is about our data falling into the hands of hackers and being used for criminal purposes.

But when you combine AI and data, a bigger concern should catch our attention. Even if we have been able to retain some degree of anonymity, this is no longer the case. Everything we do is now traceable back to us.

Major tech platforms generally deal with any privacy concerns with the same assurance: “Don’t worry, your data is anonymized!” But really, even anonymized data requires very few dots to be connected to relink the data back to your identity.

Here is an example from the Electronic Frontier Foundation. Let’s say there is data that includes your name, your ZIP or postal code, your gender and your birthdate. If you remove your name, but include those other identifiers, technically that data is now anonymized.

But, says the EFF:

  • First, think about the number of people that share your specific ZIP or postal code. 
  • Next, think about how many of those people also share your birthday. 
  • Now, think about how many people share your exact birthday, ZIP code, and gender. 

According to a study from Carnegie Mellon University, those three factors are all that’s needed to identify 87% of the US population. If we fold in AI and its ability to quickly crunch massively large data sets to identify patterns, that percentage effectively becomes 100% and the data horizon expands to include pretty much everything we say, post, do or think. We may not think so, but we are constantly in the digital data spotlight and it’s a good bet that somebody, somewhere is watching our supposedly anonymous activities.
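To make that re-identification math concrete, here's a minimal sketch in Python. The records are entirely hypothetical (this is not the Carnegie Mellon data), but it shows the mechanism: each extra quasi-identifier splits the population into smaller and smaller buckets, until most buckets contain exactly one person.

```python
from collections import Counter

# A toy "anonymized" table: names stripped, quasi-identifiers kept.
# These records are invented for illustration only.
records = [
    {"zip": "90210", "birthdate": "1984-03-12", "gender": "F"},
    {"zip": "90210", "birthdate": "1984-03-12", "gender": "M"},
    {"zip": "90210", "birthdate": "1991-07-04", "gender": "F"},
    {"zip": "10001", "birthdate": "1984-03-12", "gender": "F"},
    {"zip": "10001", "birthdate": "1975-11-30", "gender": "M"},
]

def unique_fraction(rows, keys):
    """Fraction of rows whose combination of `keys` appears exactly once."""
    combos = Counter(tuple(r[k] for k in keys) for r in rows)
    unique = sum(1 for r in rows if combos[tuple(r[k] for k in keys)] == 1)
    return unique / len(rows)

# ZIP code alone pins down no one in this toy table...
print(unique_fraction(records, ["zip"]))                         # 0.0
# ...but ZIP + birthdate + gender pins down every single record.
print(unique_fraction(records, ["zip", "birthdate", "gender"]))  # 1.0
```

The 87% figure works the same way at national scale: "anonymized" just means the buckets are still big enough to hide in, and three ordinary fields already make most of them buckets of one.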

The other shred of comfort we tend to cling to when we trade away our privacy is that at least the data is held by companies we are familiar with, such as Google and Facebook. But according to a recent survey by Merkle reported on in MediaPost by Ray Schultz, even that small comfort may be slipping from our grasp. Fifty-eight percent of respondents said they were concerned about whether their data and identity were being protected.

Let’s face it. If a platform is supported by advertising, then that platform will continue to develop tools to more effectively identify and target prospects. You can’t do that and also effectively protect privacy. The two things are diametrically opposed. The platforms are creating an ecosystem where it will become easier and easier to exploit individuals who thought they were protected by anonymity. And AI will exponentially accelerate the potential for that exploitation.

The platforms’ failure to protect individuals is currently being investigated by the US Senate Judiciary Committee. The individuals in this case are children, and the protection that has failed is against sexual exploitation. None of the platform executives giving testimony intended for this to happen. Mark Zuckerberg apologized to the parents at the hearing, saying, “I’m sorry for everything you’ve all gone through. It’s terrible. No one should have to go through the things that your families have suffered.”

But this exploitation didn’t happen just because of one little crack in the system or because someone slipped up. It’s because Meta has intentionally and systematically been building a platform on which the data is collected and the audience is available that makes this exploitation possible. It’s like a gun manufacturer standing up and saying, “I’m sorry. We never imagined our guns would be used to actually shoot people.”

The most important question is: do we care that our privacy has effectively been destroyed? Sure, when we’re asked in a survey if we’re worried, most of us say yes. But our actions say otherwise. Would we trade away the convenience and utility these platforms offer us in order to get our privacy back? Probably not. And all the platforms know that.

As I said at the beginning, our privacy has been sliding down a slippery slope for a long time now. And with AI now in the picture, it’s probably going down for the last time. There is really no more slope left to slide down.

What If We Let AI Vote?

In his bestseller Homo Deus, Yuval Noah Harari suggests AI might mean the end of democracy. His reasoning comes from an interesting perspective: how societies crunch their data.

Harari acknowledges that democracy might have been the best political system available to us – up to now – because it relied on the wisdom of crowds. The operating hypothesis is that if you get enough people together, each holding different bits of data, you benefit from the aggregation of that data; theoretically, if you allow everyone to vote, the aggregated data will guide the majority to the best possible decision.

Now, there are a truckload of “yeah, but”s in that hypothesis, but it does make sense. If the human ability to process data was the single biggest bottleneck in making the best governing decisions, distributing the processing among a whole bunch of people was a solution. Not the perfect solution, perhaps, but probably better than the alternatives. As Winston Churchill put it, “It has been said that democracy is the worst form of government except for all those other forms that have been tried from time to time.”
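The statistical intuition behind the wisdom of crowds can be illustrated with a toy simulation (the numbers here are purely illustrative, not from Harari’s book): give each “voter” a noisy, partial estimate of some underlying truth, and the aggregate lands far closer to that truth than a typical individual does.

```python
import random
import statistics

random.seed(42)

true_value = 100.0  # the hypothetical "best decision", expressed as a number

# Each voter holds a noisy, partial estimate of the truth.
estimates = [true_value + random.gauss(0, 25) for _ in range(10_000)]

# Aggregating (here, averaging) pools everyone's partial information.
crowd_guess = statistics.mean(estimates)

# Compare against how far off a typical individual is on their own.
typical_individual_error = statistics.mean(abs(e - true_value) for e in estimates)

print(f"crowd error:      {abs(crowd_guess - true_value):.2f}")
print(f"individual error: {typical_individual_error:.2f}")
```

The crowd’s error shrinks roughly with the square root of the number of voters, which is the aggregation benefit Harari describes; it also shows why the “dirty data” problem discussed below matters, since the averaging trick only works when individual errors are independent rather than shared across an echo chamber.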

So, if we look back at our history, democracy seems to emerge as the winner. But the whole point of Harari’s Homo Deus is to look forward. It is, he promises, “A Brief History of Tomorrow.” And that tomorrow includes a world with AI, which blows apart the human data-processing bottleneck: “As both the volume and speed of data increase, venerable institutions like elections, parties and parliaments might become obsolete – not because they are unethical, but because they don’t process data efficiently enough.”

The other problem with democracy is that the data we use to decide is dirty. Increasingly, thanks to the network effect anomalies that come with social media, we are using data that has no objective value; it’s simply the emotional effluent of ideological echo chambers. This is true on both the right and the left ends of the political spectrum. Human brains default to available and easily digestible information that happens to conform to our existing belief schema, and thanks to social media, there is no shortage of this severely flawed data.

So, if AI can process data exponentially faster than humans, can analyze that data to make sure it meets some type of objectivity threshold, and can make decisions based on algorithms that are dispassionately rational, why shouldn’t we let AI decide who should form our governments?

Now, I can pretty much guarantee that many of you, as you’re reading this, are saying that this is B.S. – that it would, in fact, be humans surrendering control in the most important of arenas. But I must ask in all seriousness: why not? Could AI do worse than we humans do? Worse than we have done in the past? Worse than we might do again in the very near future?

These are exactly the type of existential questions we have to ask when we ponder our future in a world that includes AI.

There’s a certain hubris in our belief that we’re the best choice to be put in control of a situation. As Harari admits, the liberal human view – that we have free will and should have control of our own future – was really the gold standard. Like democracy, it wasn’t perfect, but it was better than all the alternatives.

The problem is that there is now a lot of solid science indicating that our concept of free will is an illusion. We are driven by biological algorithms built up over thousands of years to survive in a world that no longer exists. We self-apply a thin veneer of rationality and free will after the fact, to convince ourselves that we were in control and meant to do whatever it was we did. What’s even worse, when it appears we might have been wrong, we double down on the mistake, twisting the facts to conform to our illusion of how we believe things are.

But we now live in a world where there is – or soon will be – a better alternative. One without the bugs that proliferate in the biological OS that drives us.

As another example of this impending crisis of our own consciousness, let’s look at driving.

Up to now, a human was the best choice to drive a car. We were better at it than chickens or chimpanzees. But we are at the point where that may no longer be true. There is a strong argument that – as of today – autonomous cars guided by AI are safer than human-controlled ones. And if the jury is still out on that question today, it will almost certainly be settled in the very near future. Yet we humans are loath to admit the inevitable and give up the wheel. It’s the same story as with our democratic choices.

So, let’s take it one step further. If AI can do a better job than humans at determining who should govern us, it will also do a better job at the actual governing. All the same caveats apply. When you think about it, democracy boils down to various groups of people pointing fingers at those chosen by other groups, saying they will make more mistakes than our choice would. The common denominator is this: everyone is assumed to make mistakes. And that is absolutely the case. Right or left, Republican or Democrat, liberal or conservative, no matter who is in power, they will screw up. Repeatedly.

Because they are, after all, only human.

Fooling Some of the Systems Some of the Time

If there’s a system, there’s a way to game it. Especially when those systems are tied to someone making money.

Buying a Best Seller

Take publishing, for instance. New books that make the New York Times best-seller list sell more copies than those that don’t. A 2004 study by University of Wisconsin economics professor Alan Sorensen found the bump is about 57%. That’s certainly motivation for a publisher to game the system.

There’s also another motivating factor. According to a Times op-ed, Michael Korda, former editor-in-chief of Simon & Schuster, said that an author’s contract can include a bonus of up to $100,000 for hitting No. 1 on the list.

This amplifying effect is not a one-shot deal. Make the list for just one week, in any slot under any category, and you can forever call yourself a “NY Times bestselling author,” reaping the additional sales that that honor brings with it. Given the potential rewards, you can guarantee that someone is going to be gaming the system.

And how do you do that? Typically, by making a bulk purchase through an outlet that feeds its sales numbers to the Times. That’s what Donald Trump Jr. and his publisher did for his book “Triggered,” which hit No. 1 on its release in November 2019, according to various reports. Just before the release, the Republican National Committee reportedly placed a $94,800 order with a bookseller, equating to about 4,000 books, enough to ensure that “Triggered” would land on the Times list. (Note: The Times flags suspicious entries with a dagger symbol when it believes someone may be gaming the system by buying in bulk.)
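A quick back-of-envelope check of those reported figures (the dollar amount and copy count are from the reports cited above; the per-copy price is just their ratio):

```python
# Back-of-envelope check of the reported bulk order:
# a $94,800 order for roughly 4,000 copies implies the per-copy price below.
order_total_dollars = 94_800   # reported RNC order
copies = 4_000                 # approximate copy count, per the reports
price_per_copy = order_total_dollars / copies
print(f"${price_per_copy:.2f} per copy")
```

That works out to $23.70 per copy, squarely in hardcover territory, which is why ~4,000 copies is a reasonable reading of the reported order.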

But it’s not only book sales where you’ll find a system primed for rigging. Even those supposedly objective 5-star buyer ratings you find everywhere have also been gamed.

5-Star Scams

A 2021 McKinsey report said that, depending on the category, a small bump in a star rating on Amazon can translate into a 30% to 200% boost in sales. Given that potential windfall, it’s no surprise that fake-review scams proliferate on the gargantuan retail platform.

A recent Wired exposé on these fake reviews found a network that had achieved a sobering level of sophistication. It included active recruitment of human reviewers (called “Jennies” – if you haven’t been recruited yet, you’re a “Virgin Jenny”) willing to write a fake review for a small payment or free products. The networks include recruiting agents in Pakistan, Bangladesh and India working for sellers in China.

But the fake-review ecosystem also includes reviews cranked out by AI-powered automated agents. As AI improves, these reviews will become harder to spot and weed out of the system.

Some recent studies have found that, depending on the category, over one-third of the reviews you see on Amazon are fake. Books, baby products and large appliance categories are the worst offenders.

Berating Ratings…

Back in 2014, Itamar Simonson and Emanuel Rosen wrote a book called “Absolute Value: What Really Influences Customers in the Age of (Nearly) Perfect Information.” Spoiler alert: they posited that consumer reviews and other sources of objective information were replacing traditional marketing and branding in terms of what influenced buyers.

They were right. The stats I cited above show how powerful these supposedly objective factors can be in driving sales. But unfortunately, thanks to the inevitable attempts to game these systems, the information they provide can often be far from perfect.

Greetings from the Great White (Frozen) North

This post comes to you from Edmonton, Alberta, where the outside temperature right now is minus forty degrees Celsius. If you’re wondering what that is in Fahrenheit, the answer is, “It doesn’t matter.” Minus forty is where the two scales match up.

If you add a bit of a breeze to that, you get a windchill factor that makes it feel like minus fifty Celsius (-58° F). The weather lady on the morning news just informed me that at that temperature, exposed flesh freezes in two to five minutes. Yesterday, an emergency alert flashed on my phone warning us that Alberta’s power grid was overloaded and could collapse under the demand, causing rotating power outages.
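The minus-forty crossover is just the fixed point of the standard conversion formula, and the same formula confirms the windchill figure; a two-line sketch:

```python
def c_to_f(c: float) -> float:
    """Standard Celsius-to-Fahrenheit conversion: F = 9C/5 + 32."""
    return c * 9 / 5 + 32

# Setting F = C and solving C = 9C/5 + 32 gives -4C/5 = 32, i.e. C = -40,
# the one temperature where both scales read the same number.
print(c_to_f(-40))  # -40.0
print(c_to_f(-50))  # -58.0
```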

I don’t know about you, but I don’t think anyone should live in a place where winter can kill you. Nothing works as it should when it gets this cold, humans included. And yet, Albertans are toughing it out. I noticed that when it gets this cold, the standard niceties that people say change. Instead of telling me to “have a nice day,” everyone has been encouraging me to “stay warm.”

There’s a weird sort of bonding that happens when the weather becomes the common enemy. Maybe we all become brothers and sisters in arms, struggling to survive against the elements. It got me to wondering: Is there a different sense of community in places where it’s really cold in the winter?

When I asked Google which countries had the strongest social ties, it gave me a list of nine: Finland, Norway, Canada, Denmark, Switzerland, Australia, Netherlands, Iceland and Italy. Seven of those places have snowy, cold winters. If you look at countries that have strong social democracies — governments established around the ideal of the common good — again, you’ll find that most of them are well north (or south, in the case of New Zealand) of the equator.

But let’s leave politics aside. Maybe it’s just the act of constantly transitioning from extreme cold to warm and cozy places where there’s a friendly face sincerely wishing you’ll “stay warm” that builds stronger social bonds. As I mentioned in a previous post, the Danes even have a name for it: hygge. It translates loosely to “coziness.”

There are definitely physical benefits to going from being really cold to being really warm. The Finns discovered this secret thousands of years ago when they created the sauna. The whole idea is to go repeatedly from a little hut where the temperature hovers around 80-90° C (176-194° F) to a plunge through a hole you’ve cut in the ice, into water barely above freezing. A paper from the Mayo Clinic lists the health benefits of saunas in a rather lengthy paragraph, touching on everything from reducing inflammation to clearer skin to fighting the flu.

But the benefits aren’t just physical. Estonia, which is just south of Finland, also has a strong sauna culture. A brilliant documentary by Anna Hints, “Smoke Sauna Sisterhood,” shows that the sauna can be a sacred space. As Estonia’s official submission to the Oscars, it’s in contention for a nomination.

Hints’ documentary shows that saunas can touch us on a deeply spiritual level, healing scars that can build up through our lives. There is something in the cycle of heat and cold that taps into inner truths. As Hints said in a recent interview, “With time, deeper, deeper layers of physical dirt start to come up to the surface, but also emotional dirt starts to come up to the surface.”

While I didn’t visit any saunas on my Edmonton trip, every time I ventured outside was a hot-cold adventure. Everyone turns the thermostat up a little when it gets this cold, so you’re constantly going through doors where the temperature can swing 75 degrees Celsius (135 degrees Fahrenheit) in an instant. I don’t know if there’s a health benefit, but I can tell you it feels pretty damned good to get that warm welcome when you’re freezing your butt off.

Stay warm!