You Know What Government Agencies Need? Some AI

A few items on my recent to-do list have necessitated dealing with multiple levels of governmental bureaucracy: regional, provincial (this being in Canada) and federal. All three experiences were, without exception, a complete pain in the ass. So, having spent a good part of my life advising companies on how to improve their customer experience, the question that kept bubbling up in my brain was, “Why the hell is dealing with government such a horrendous experience?”

Anecdotally, I know everyone I know feels the same way. But what about everyone I don’t know? Do they also feel that the experience of dealing with a government agency is on par with having a root canal or colonoscopy?

According to a survey conducted last year by the research firm Qualtrics XM, the answer appears to be yes. This report paints a pretty grim picture. Satisfaction with government services ranked dead last when compared to private sector industries.

The next question, being that AI is all I seem to have been writing about lately, is this: “Could AI make dealing with the government a little less awful?”

And before you say it, yes, I realize I recently took a swipe at the AI-empowered customer service used by my local telco. But when the bar is set as low as it is for government customer service, I have to believe that even with the limitations of artificially intelligent customer service as it currently exists, it would still be a step forward. At least the word “intelligent” is in there somewhere.

But before I dive into ways to potentially solve the problem, we should spend a little time exploring the root causes of crappy customer service in government.

First of all, government has no competitors. That means there are no market forces driving improvement. If I have to get a building permit or renew my driver’s license, I have one option available. I can’t go down the street and deal with “Government Agency B.”

Secondly, in private enterprise, the maxim is that the customer is always right. This is, of course, bullshit.  The real truth is that profit is always right, but with customers and profitability so inextricably linked, things generally work out pretty well for the customer.

The same is not true when dealing with the government. Their job is to make sure things are (supposedly) fair and equitable for all constituents. And the determination of fairness needs to follow a universally understood protocol. The result of this is that government agencies are relentlessly regulation bound and fixated on policies and process, even if those are hopelessly archaic. Part of this is to make sure that the rules are followed, but let’s face it, the bigger motivator here is to make sure all bureaucratic asses are covered.

Finally, there is a weird hierarchy that exists in government agencies.  Frontline people tend to stay in place even if governments change. But the same is often not true for their senior management. Those tend to shift as governments come and go. According to the Qualtrics study cited earlier, less than half (48%) of government employees feel their leadership is responsive to feedback from employees. About the same number (47%) feel that senior leadership values diverse perspectives.

This creates a workplace where most of the people dealing with clients feel unheard, disempowered and frustrated. This frustration can’t help but seep across the counter separating them from the people they’re trying to help.

I think all these things are givens and are unlikely to change in my lifetime. Still, perhaps AI could be used to help us navigate the serpentine landscape of government rules and regulations.

Let me give you one example from my own experience. I have to move a retaining wall that happens to front on a lake. In Canada, almost all lake foreshores are Crown land, which means you need to deal with the government to access them.

I have now been bouncing back and forth between three provincial ministries for almost two years to try to get a permit to do the work. In that time, I have lost count of how many people I’ve had to deal with. Just last week, someone sent me a couple of user guides that “I should refer to” in order to help push the process forward. One of them is 29 pages long. The other is 42 pages. They are both about as compelling and easy to understand as you would imagine a government document would be. After a quick glance, I figured out that only two of the 71 combined pages are relevant to me.

As I worked my way through them, I thought, “surely some kind of ChatGPT interface would make this easier, digging through the reams of regulation to surface the answers I was looking for. Perhaps it could even guide you through the application process.”

Let me tell you, it takes a lot to make me long for an AI-powered interface. But apparently, dealing with any level of government is enough to push me over the edge.

Dove’s Takedown Of AI: Brilliant But Troubling Brand Marketing

The Dove brand has just placed a substantial stake in the battleground over the use of AI in media. In a campaign called “Keep Beauty Real”, the brand released a 2-minute video showing how AI can create an unattainable and highly biased (read “white”) view of what beauty is.

If we’re talking branding strategy, this campaign is a master class. It’s totally on-brand with Dove, which introduced its “Campaign for Real Beauty” 18 years ago. Since then, the company has consistently fought digital manipulation of advertising images, promoted positive body image and reminded us that beauty can come in all shapes, sizes and colors. The video itself is brilliant. You really should take a couple of minutes to see it if you haven’t already.

But what I found just as interesting is that Dove chose to use AI as a brand differentiator. The video starts by telling us, “By 2025, artificial intelligence is predicted to generate 90% of online content.” It wraps up with a promise: “Dove will never use AI to create or distort women’s images.”

This makes complete sense for Dove. It aligns perfectly with its brand. But it can only work because AI now has what psychologists call emotional valency. And that has a number of interesting implications for our future relationship with AI.

“Hot Button” Branding

Emotional valency is just a fancy way of saying that a thing means something to someone. The valence can be positive or negative. The term valence comes from the German word valenz, which means to bind. So, if something has valency, it’s carrying emotional baggage, either good or bad.

This is important because emotions allow us to — in the words of Nobel laureate Daniel Kahneman — “think fast.” We make decisions without really thinking about them at all. It is the opposite of rational and objective thinking, or what Kahneman calls “thinking slow.”

Brands are all about emotional valency. The whole point of branding is to create a positive valence attached to a brand. Marketers don’t want consumers to think. They just want them to feel something positive when they hear or see the brand.

So for Dove to pick AI as an emotional hot button to attach to its brand, it must believe that the negative valence of AI will add to the positive valence of the Dove brand. That’s how branding mathematics sometimes works: a negative added to a positive may not equal zero, but may equal 2 — or more. Dove is gambling that with its target audience, the math will work as intended.

I have nothing against Dove, as I think the points it raises about AI are valid — but here’s the issue I have with using AI as a brand reference point: It reduces a very complex issue to a knee-jerk reaction. We need to be thinking more about AI, not less. The consumer marketplace is not the right place to have a debate on AI. It will become an emotional pissing match, not an intellectually informed analysis. And to explain why I feel this way, I’ll use another example: GMOs.

How Do You Feel About GMOs?

If you walk down the produce or meat aisle of any grocery store, I guarantee you’re going to see a “GMO-Free” label. You’ll probably see several. This is another example of squeezing a complex issue into an emotional hot button in order to sell more stuff.

As soon as I mentioned GMO, you had a reaction to it, and it was probably negative. But how much do you really know about GMO foods? Did you know that GMO stands for “genetically modified organisms”? I didn’t, until I just looked it up now. Did you know that you almost certainly eat foods that contain GMOs, even if you try to avoid them? If you eat anything with sugar harvested from sugar beets, you’re eating GMOs. And over 90% of all canola, corn and soybean crops are GMOs.

Further, did you know that genetic modifications make plants more resistant to disease, more stable for storage and more likely to grow in marginal agricultural areas? If it wasn’t for GMOs, a significant portion of the world’s population would have starved by now. A 2022 study suggests that GMO foods could even slow climate change by reducing greenhouse gases.

If you do your research on GMOs — if you “think slow” about them — you’ll realize that there is a lot to think about, both good and bad. For all the positives I mentioned before, there are at least an equal number of troubling things about GMOs. There is no easy answer to the question, “Are GMOs good or bad?”

But by bringing GMOs into the consumer world, marketers have shut down that debate. They are telling you, “GMOs are bad. And even though you consume GMOs by the shovelful without even realizing it, we’re going to slap some GMO-free labels on things so you will buy them and feel good about saving yourself and the planet.”

AI appears to be headed down the same path. And if GMOs are complex, AI is exponentially more so. Yes, there are things about AI we should be concerned about. But there are also things we should be excited about. AI will be instrumental in tackling the many issues we currently face.

I can’t help worrying when complex issues like AI and GMOs are broad-stroked by the same brush, especially when that brush is in the hands of a marketer.

Feature image: Body Scan 002 by Ignotus the Mage, used under CC BY-NC-SA 2.0 / Unmodified

AI Customer Service: Not Quite Ready For Prime Time

I had a problem with my phone, which is a landline (and yes, I’ve heard all the smartass remarks about being the last person on earth with a landline, but go ahead, take your best shot).

The point is, I had a problem. Actually, the phone had a problem, in that it didn’t work. No tone, no life, no nothing. So that became my problem.

What did I do? I called my provider (from my cell, which I do have) and after going through this bizarre ID verification process that basically stopped just short of a DNA test, I got routed through to their AI voice assistant, who pleasantly asked me to state my problem in one short sentence.

As soon as I heard that voice, which used the same dulcet tones as Siri, Alexa and the rest of the AI Geek Chorus, I knew what I was dealing with. Somewhere at a board table in the not-too-distant past, somebody had come up with the brilliant idea of using AI for customer service. “Do you know how much money we could save by cutting humans out of our support budget?” After pointing to a chart with a big bar and a much smaller bar to drive the point home, there would have been much enthusiastic applause and back-slapping.

Of course, the corporate brain trust had conveniently forgotten that they can’t cut all humans out of the equation, as their customers still fell into that category.  And I was one of them, now dealing face to face with the “Artificially Intelligent” outcome of corporate cost-cutting. I stated my current state of mind more succinctly than the one short sentence I was instructed to use. It was, instead, one short word — four letters long, to be exact. Then I realized I was probably being recorded. I sighed and thought to myself, “Buckle up. Let’s give this a shot.”

I knew before starting that this wasn’t going to work, but I wasn’t given an alternative. So I didn’t spend too much time crafting my sentence. I just blurted something out, hoping to bluff my way to the next level of AI purgatory. As I suspected, Ms. AI was stumped. But rather than admit she was scratching her metaphysical head, she repeated the previous instruction, preceded by a patronizing “pat on my head” recap that sounded very much like it was aimed at someone with the IQ of a soap dish. I responded again with my four-letter reply — repeated twice, just for good measure.

Go ahead, record me. See if I care.

This time I tried a roundabout approach, restating my issue in terms that hopefully could be parsed by the cybernetic sadist that was supposedly trying to help me. Needless to say, I got no further. What I did get was a helpful text with all the service outages in my region. Which I knew wasn’t the problem. But no one asked me.

I also got a text with some troubleshooting tips to try at home. I had an immediate flashback to my childhood, trying to get my parents’ attention while they were entertaining friends at home: “Did you try to figure it out yourself, Gordie? Don’t bother Mommy and Daddy right now. We’re busy doing grown-up things. Run along and play.”

At this point, the scientific part of my brain started toying with the idea of making this an experiment. Let’s see how far we can push the boundaries of this bizarre scenario: equally frustrating and entertaining. My AI tormenter asked me, “Do you want to continue to try to troubleshoot this on the phone with me?”

I was tempted, I really was. Probably by the same part of my brain that forces me to smell sour milk or open the lid of that unidentified container of green fuzz that I just found in the back of the fridge.  And if I didn’t have other things to do in my life, I might have done that. But I didn’t. Instead, in desperation I pleaded, “Can I just talk to a human, please?”

Then I held my breath. There was silence. I could almost hear the AI wheels spinning. I began to wonder if some well-meaning programmer had included a subroutine for contrition. Would she start pleading for forgiveness?

After a beat and a half, I heard this, “Before I connect you with an agent, can I ask you for a few more details so they’re better able to help you?” No thanks, Cyber-Sally, just bring on a human, posthaste! I think I actually said something to that effect. I might have been getting a little punchy in my agitated state.

As she switched me to my requested human, I swore I could hear her mumble something in her computer-generated voice. And I’m pretty sure it was an imperative with two words, the first a verb with four letters, the second a subject pronoun with three letters.

And, if I’m right, I may have newfound respect for AI. Let’s just call it my version of the Turing Test.

Privacy’s Last Gasp

We’ve been sliding down the slippery slope of privacy rights for some time. But like everything else in the world, the rapid onslaught of disruption caused by AI is unfurling a massive red flag when it comes to any illusions we may have about our privacy.

We have been giving away a massive amount of our personal data for years now without really considering the consequences. If we do think about privacy, we do so as we hear about massive data breaches. Our concern typically is about our data falling into the hands of hackers and being used for criminal purposes.

But when you combine AI and data, a bigger concern should catch our attention. Even if we have been able to retain some degree of anonymity, this is no longer the case. Everything we do is now traceable back to us.

Major tech platforms generally deal with any privacy concerns with the same assurance: “Don’t worry, your data is anonymized!” But really, even anonymized data requires very few dots to be connected to relink the data back to your identity.

Here is an example from the Electronic Frontier Foundation. Let’s say there is data that includes your name, your ZIP or postal code, your gender and your birthdate. If you remove your name, but include those other identifiers, technically that data is now anonymized.

But, says the EFF:

  • First, think about the number of people that share your specific ZIP or postal code. 
  • Next, think about how many of those people also share your birthday. 
  • Now, think about how many people share your exact birthday, ZIP code, and gender. 

According to a study from Carnegie Mellon University, those three factors are all that’s needed to identify 87% of the US population. If we fold in AI and its ability to quickly crunch massively large data sets to identify patterns, that percentage effectively becomes 100% and the data horizon expands to include pretty much everything we say, post, do or think. We may not think so, but we are constantly in the digital data spotlight and it’s a good bet that somebody, somewhere is watching our supposedly anonymous activities.
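The EFF’s three bullet points boil down to a simple counting argument, and you can sketch it with rough numbers. To be clear, the population and lifespan figures below are illustrative assumptions for the sake of the arithmetic, not census data:

```python
# Back-of-envelope version of the re-identification argument.
# The population and lifespan figures are illustrative assumptions.

zip_population = 10_000   # rough average population of a US ZIP code
birthdates = 365 * 80     # possible day/month/year combos in an ~80-year lifespan
genders = 2

# Expected number of OTHER people in your ZIP code who share your exact
# birthdate and gender, assuming both are spread roughly uniformly:
expected_matches = zip_population / (birthdates * genders)

print(f"{expected_matches:.2f}")  # about 0.17
```

When that expected count falls below 1, the triple of ZIP code, birthdate and gender usually points to exactly one person, which is why stripping the name alone doesn’t anonymize much of anything.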

The other shred of comfort we tend to cling to when we trade away our privacy is that at least the data is held by companies we are familiar with, such as Google and Facebook. But according to a recent survey by Merkle, reported on in MediaPost by Ray Schultz, even that small comfort may be slipping from our grasp. Fifty-eight percent of respondents said they were concerned about whether their data and identity were being protected.

Let’s face it. If a platform is supported by advertising, then that platform will continue to develop tools to more effectively identify and target prospects. You can’t do that and also effectively protect privacy. The two things are diametrically opposed. The platforms are creating an ecosystem where it will become easier and easier to exploit individuals who thought they were protected by anonymity. And AI will exponentially accelerate the potential for that exploitation.

The platforms’ failure to protect individuals is currently being investigated by the US Senate Judiciary Committee. The individuals in this case are children and the protection that has failed is against sexual exploitation. None of the platform executives giving testimony intended for this to happen. Mark Zuckerberg apologized to the parents at the hearing, saying, “I’m sorry for everything you’ve all gone through. It’s terrible. No one should have to go through the things that your families have suffered.”

But this exploitation didn’t happen just because of one little crack in the system or because someone slipped up. It’s because Meta has intentionally and systematically been building a platform on which the data is collected and the audience is available that make this exploitation possible. It’s like a gun manufacturer standing up and saying, “I’m sorry. We never imagined our guns would be used to actually shoot people.”

The most important question is: do we care that our privacy has effectively been destroyed? Sure, when we’re asked in a survey if we’re worried, most of us say yes. But our actions say otherwise. Would we trade away the convenience and utility these platforms offer us in order to get our privacy back? Probably not. And all the platforms know that.

As I said at the beginning, our privacy has been sliding down a slippery slope for a long time now. And with AI now in the picture, it’s probably going down for the last time. There is really no more slope left to slide down.

Fooling Some of the Systems Some of the Time

If there’s a system, there’s a way to game it. Especially when those systems are tied to someone making money.

Buying a Best Seller

Take publishing, for instance. New books that make the New York Times Best-Seller List sell more copies than ones that don’t. A 2004 study by University of Wisconsin economics professor Alan Sorensen found the bump is about 57%. That’s certainly motivation for a publisher to game the system.

There’s also another motivating factor. According to a Times op-ed, Michael Korda, former editor in chief of Simon and Schuster, said that an author’s contract can include a bonus of up to $100,000 for hitting No. 1 on the list.

This amplifying effect is not a one-shot deal. Make the list for just one week, in any slot under any category, and you can forever call yourself a “NY Times bestselling author,” reaping the additional sales that that honor brings with it. Given the potential rewards, you can guarantee that someone is going to be gaming the system.

And how do you do that? Typically, by doing a bulk purchase through an outlet that feeds its sales numbers to The Times. That’s what Donald Trump Jr. and his publisher did for his book “Triggered,” which hit No. 1 on its release in November of 2019, according to various reports. Just before the release, the Republican National Committee reportedly placed a $94,800 order with a bookseller, which would equate to about 4,000 books, enough to ensure that “Triggered” would end up on the Times list. (Note: The Times does flag these suspicious entries with a dagger symbol when it believes someone may be gaming the system by buying in bulk.)
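As a quick sanity check on those two reported figures, the implied price per copy works out to roughly hardcover retail:

```python
# Implied per-copy price of the reported bulk order for "Triggered".
order_total = 94_800   # reported RNC order, in dollars
copies = 4_000         # approximate number of books reported

price_per_copy = order_total / copies
print(f"${price_per_copy:.2f}")  # $23.70
```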

But it’s not only book sales where you’ll find a system primed for rigging. Even those supposedly objective 5-star buyer ratings you find everywhere have also been gamed.

5-Star Scams

A 2021 McKinsey report said that, depending on the category, a small bump in a star rating on Amazon can translate into a 30% to 200% boost in sales. Given that potential windfall, it’s no surprise that you’ll find fake review scams proliferate on the gargantuan retail platform.

A recent Wired exposé on these fake reviews found a network that had achieved a sobering level of sophistication. It included active recruitment of human reviewers (called “Jennies” — if you haven’t been recruited yet, you’re a “Virgin Jenny”) willing to write a fake review for a small payment or free products. These networks rely on recruiting agents in Pakistan, Bangladesh and India working for sellers in China.

But the fake review ecosystem also included reviews cranked out by AI-powered automated agents. As AI improves, these types of reviews will be harder to spot and weed out of the system.

Some recent studies have found that, depending on the category, over one-third of the reviews you see on Amazon are fake. Books, baby products and large appliance categories are the worst offenders.

Berating Ratings…

Back in 2014, Itamar Simonson and Emanuel Rosen wrote a book called “Absolute Value: What Really Influences Customers in the Age of (Nearly) Perfect Information.” Spoiler alert: they posited that consumer reviews and other sources of objective information were replacing traditional marketing and branding in terms of what influenced buyers.

They were right. The stats I cited above show how powerful these supposedly objective factors can be in driving sales. But unfortunately, thanks to the inevitable attempts to game these systems, the information they provide can often be far from perfect.

A Column About Nothing

What do I have to say in my last post for 2023? Nothing.

Last week, I talked about the cost of building a brand. Then, this week, I (perhaps being the last person on earth to do so) heard about Nothing. No – not small “n” nothing as in the absence of anything – Big “N” Nothing as in the London-based tech start-up headed by Chinese-born entrepreneur Carl Pei.

Nothing, according to their website, crafts “intuitive, flawlessly connected products that improve our lives without getting in the way. No confusing tech-speak. No silly product names. Just artistry, passion and trust. And products we’re proud to share with our friends and family. Simple.”

Now, just like the football talents of David Beckham I explored in my last post, the tech Nothing produces is good – very good – but not uniquely good. The Nothing Phone (1) and the just-released Nothing Phone (2) are capable mid-range smartphones. Again, from the Nothing website, you are asked to “imagine a world where all your devices are seamlessly connected.”

It may just be me, but isn’t that what Apple has been promising (and occasionally delivering) for the better part of the last quarter century? Doesn’t Google make the same basic promise? Personally, I see nothing earth-shaking in Nothing’s mission. It all feels very “been there, done that.” Or, if you’ll allow me – it all seems like much ado about Nothing (sorry). Yet people paid thousands over the asking price when the first 100 units of the Nothing Phone (1) were put up for auction prior to its public launch.

Why? Because of the value of the Nothing brand. And that value comes from one place. No, not the tech. The community. Pei may be a pretty good builder of phones, but he’s an even better builder of community. He has expertly built a fan base who love to rave about Nothing. On the “Community” section of the Nothing website, you’re invited to “abandon the glorification of I and open up to the potential of We.” I’m not sure exactly what that means, but it all sounds very cool and idealistic, if a little vague.

Another genius move by Pei was to open up to the potential of Nothing itself. In what is probably a latent (or perhaps not-so-latent) backlash against over-advertising and in-your-face branding, we were eager to jump on the Nothing bandwagon. It seems like anti-branding, but it’s not. It’s actually expertly crafted, by-the-book branding. Seinfeld, a show about nothing that became one of the most popular TV shows in history, proved there is some serious branding swagger in the concept of nothing. I can’t believe no one thought to stake a claim to this branding goldmine before now.

The Spark in the Jar: Jon Ive and Steve Jobs

I sold all my Apple stock shortly after Steve Jobs passed away. It was premature (which is another word for stupid). Apple stock is today worth about 10 times what I sold it for.

My reasoning was thus: Apple couldn’t function without Steve Jobs – not for long, anyway.

Well, 12 years later, it’s doing quite well, thank you. It has a stock price of almost $200 per share (as of the writing of this). Sales have never been stronger. While replacement CEO Tim Cook is no Steve Jobs, financially he has grown Apple into a monolithic force with a market capitalization of almost 3 trillion dollars. There is no other company even close to that.

Now, with the benefit of hindsight, I realize I underestimated Tim Cook. But I stand by my original instinct: whatever Apple was under Steve Jobs, it couldn’t survive without him. And to understand why, let’s take a quick look back.

Jobs was infamously ousted from Apple in 1985. He remained in “NeXTile” for 12 years, coming back in 1997 to lead Apple into what many believe was its Golden Era. He passed away in 2011.

In the 14 years Jobs led Apple in his second run, the stock price went from about 20 cents to about 12 dollars. That’s growth of about 6,000%. Steve Jobs brought Apple back from the brink of death. If it wasn’t for a lifeline thrown to it by its number one competitor, Microsoft, in 1997, Apple would be no more. As Jobs himself said, “Apple was in very serious trouble. And what was really clear was that if the game was a zero-sum game where, for Apple to win, Microsoft had to lose, then Apple was going to lose.”

But those growth numbers are a little misleading. For you to be one of the fastest growing companies in history, it helps when you start with a very, very small number. A share price of $0.20 is a very, very small number.

Much as everyone lauds Steve Jobs for the turnaround of Apple, I would argue that Tim Cook’s performance is even more impressive. To say that Apple was already on a roll when Cook took over is an understatement. In 2011, Apple was going from success to success and could seem to do no wrong. That was one of the reasons I was pessimistic about its future. I thought it couldn’t sustain its run, especially when it came to introducing new products. How many Jobs-inspired home runs could it possibly have in its pipeline?
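To make the small-base point concrete, here is the same arithmetic applied to both eras, using the approximate share prices cited above ($0.20 to $12 under Jobs, $12 to roughly $200 under Cook):

```python
# Percentage growth flatters a tiny starting base: compare the two eras
# using the approximate (split-adjusted) share prices cited in the text.

def pct_growth(start: float, end: float) -> float:
    return (end - start) / start * 100

jobs = pct_growth(0.20, 12.00)    # ~1997 -> 2011
cook = pct_growth(12.00, 200.00)  # ~2011 -> time of writing

print(f"Jobs era: {jobs:,.0f}%")  # Jobs era: 5,900%
print(f"Cook era: {cook:,.0f}%")  # Cook era: 1,567%
```

Cook’s percentage is smaller, but the absolute gain per share ($188 versus $11.80) is an order of magnitude larger, which is exactly the sense in which the Jobs-era growth number can mislead.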

But what Tim Cook was great at was logistics. He took that pipeline and managed to squeeze out another decade plus of value building thanks to what may be the best supply chain strategy in the world. Analysts have said that half of Apple’s 3 trillion dollars in value is directly attributable to that supply chain.

But when you squeeze every last drop of efficiency out of a supply chain, something has to give. And in this case, it may have been creativity.

The Jobs-era Apple was a very rare and delicate thing in the corporate world: a leader who was uncompromising on user experience and a design team able to rise and meet the challenge. Was it dictatorial? Absolutely. Was it magical? Almost always. It was like catching a spark in a jar.

That design team was headed by Jonathan Ive. And when you have a team that’s the absolute best in the world, you can put up with an asshole here and there, especially when that asshole keeps challenging you to be better. And when you keep delivering.

The alchemy that made Apple spectacularly successful from 1997 to 2011 was a fragile thing. It wouldn’t take much to change the formula forever. For example, if you removed the catalyst – which was Steve Jobs – it couldn’t survive. But equally important to that formula was Jon Ive.

As David Price, the editor of Macworld said,

“What Ive brought to Apple was a coherent personal vision. That doesn’t mean Apple’s designs on his watch were always perfect, of course; there were plenty of missteps. In broader terms, his arch-minimalism could be frustrating for those who wanted more physical controls.”


Ive and Jobs were, by all accounts, inseparable. In a heartfelt tribute to Jobs published shortly after his passing, Ive remembered,

“We worked together for nearly 15 years. We had lunch together most days and spent our afternoons in the sanctuary of the design studio. Those were some of the happiest, most creative and joyful times of my life,” Ive wrote. “I loved how he saw the world. The way he thought was profoundly beautiful.”


For Jobs and Ive – “Think Different” was both a manifesto and a mantra. That philosophy started a not-so-slow death the minute Jobs passed from this earth. Finally, in June 2019, Ive announced his departure “after years of frustration, seeing the company migrate from a design-centric entity to one that was more utilitarian.”

It seems that companies can excel at either creativity or execution. It’s very difficult – perhaps impossible – to do both. The Apple of Steve Jobs was the world’s most creative corporation. The Apple of Tim Cook is a world leader in execution. But for one to happen, the other had to make room. Today, Apple is trying to be creative by committee. Macworld’s David Price mourns the Apple that was, “Maybe Apple is no longer a company that focuses on individual personality, or indeed on thinking different. This week we also got the news that Ive’s replacement will not be replaced, with a core group of 20 designers instead reporting directly to the chief operating officer, who is no stranger to design and likely has his own ideas. If design by committee has been the de facto approach for the past four years, it’s now been made official.”

And committees always suck all the oxygen from the room. In that atmosphere, the spark that once was Apple inevitably had to go out.

Why I’m Worried About AI

Even in my world, which is nowhere near the epicenter of the technology universe, everyone is talking about AI. And depending on who’s talking, it’s either going to be the biggest boon to humanity, or it’s going to wipe us out completely. Middle ground seems to be hard to find.

I recently attended a debate at the local university about it. Two were arguing for AI, and two were arguing against. I went into the debate somewhat worried. When I walked out at the end of the evening, my worry was bubbling just under the panic level.

The “For” Team had a computer science professor, Kevin Leyton-Brown, and a philosophy professor, Madeleine Ransom. Their arguments seemed to rely mainly on AI creating more leisure time for us by freeing us from the icky jobs we’d rather not do. Leyton-Brown did make a passing reference to AI helping us solve the many, many wicked problems we face, but he never got into specifics.

“Relax!” seemed to be the message. “This will be great! Trust us!”

The “Against” Team included Bryce Traister, a professor in Creative and Critical Studies. As far as I could see, he seemed mainly worried about AI replacing Shakespeare. He did seem quite enamored with the cleverness of his own quips.

The other “Against” debater, Wendy Wong, was the only one who actually talked about something concrete I could wrap my head around. A professor of Political Science, she has a book on data and human rights coming out this fall, and many of her concerns focused on that area.

Interestingly, all the debaters mentioned social media in their arguments. And on this point, they were united: the impact of social media has been horrible. But the boosters were quick to say that AI is nothing like social media.

Except that it is. Maybe not in terms of the technology that lies beneath it, but in terms of the unintended consequences it could unleash, absolutely. Like social media, what will get us with AI are the things we don’t know we don’t know.

I remember when social media first appeared on the scene. Like AI, there were plenty of evangelists lining up saying that technology would connect us in ways we couldn’t have imagined. We were redefining community, removing the physical constraints that had previously limited connections.

If there’s a difference between social media and AI, it’s that I don’t remember the same doomsayers at the advent of social media. Everyone seemed to be saying, “This will be great! Trust us!”

Today, of course, we know better. No one was warning us that social media would divide us in ways we never imagined, driving a wedge down the ideological middle of our society. There were no hints that social media could (and still might) short circuit democracy.

Maybe that’s why we’re a little warier when it comes to AI. We’ve already been fooled once.

I find that AI Boosters share a similar mindset – they tend to be from the S.T.E.M. (Science, Technology, Engineering and Math) School of Thought. As I’ve said before, these types of thinkers tend to mistake complex problems for complicated ones. They think everything is solvable, if you just have a powerful enough tool and apply enough brain power. For them, AI is the Holy Grail – a powerful tool that potentially applies unlimited brain power.

But the dangers of AI are hidden in the roots of complexity, not complication, and that requires a different way of thinking. If we’re going to get some glimpse of what’s coming our way, I am more inclined to trust the instincts of those who think in terms of the humanities. A thinker, for example, such as Yuval Noah Harari, author of Sapiens.

Harari recently wrote an essay in the Economist that may be the single most insightful thing I’ve read about the dangers of AI: “AI has gained some remarkable abilities to manipulate and generate language, whether with words, sounds or images. AI has thereby hacked the operating system of our civilisation.”

In my previous experiments with ChatGPT, it was this fear that was haunting me. Human brains operate on narratives. We are hard-wired to believe them. By using language, AI has a back door into our brains that bypasses all our protective firewalls.

My other great fear is that the development of AI is being driven by for-profit corporations, many of which rely on advertising as their main source of revenue. If ever there was a case of putting the fox in charge of the henhouse, this is it!

When it comes to AI, it’s not my job I’m afraid of losing. It’s my ability to sniff out AI-generated bullshit. That’s what’s keeping me up at night.

Deconstructing a Predatory Marketplace

Last week, I talked about a predatory ad market that was found in — of all places — in-game ads. And the predators are — of all things — the marketers of Keto Gummies. This week, I’d like to look at why this market exists, and why someone should do something about it.

First of all, let’s understand what we mean by “predatory.” In biological terms, predation is a zero-sum game. For a predator to win, someone has to lose.  On Wikipedia, it’s phrased a little differently: “Predatory marketing campaigns may (also) rely on false or misleading messaging to coerce individuals into asymmetrical transactions. “

 “Asymmetrical” means the winner is the predator, the loser is the prey.

In the example of the gummy market, there are three winners — predators — and three losers, or prey. The winners are the marketers who are selling the gummies, the publishers who are receiving the ad revenue, and the supply-side platform that mediates the marketplace and takes its cut.

The losers — in ascending order of loss — are the users of the games who must suffer through these crappy ads, the celebrities who have had their names and images illegally co-opted by the marketer, and the consumers who are duped into actually buying a bottle of these gummies.

You might argue the order of the last two, depending on what value you put on the brand of the celebrity. But in terms of sheer financial loss, consumer fraud is a significant issue, and one that gets worse every year.  In February, the Federal Trade Commission reported that U.S. consumers lost $8.8 billion to scams last year, many of which occurred online. The volume of scams is up 30% over 2021, and is 70% higher than it was in 2020.

So it’s not hard to see why this market is predatory. But is it fraudulent? Let’s apply a legal litmus test. Fraud is generally defined as “any form of dishonest or deceptive behavior that is intended to result in financial or personal gain for the fraudster, and does harm to the victim.”

Based on this, fraud does seem to apply. So why doesn’t anyone do anything?

For one, we’re talking about a lot of potential money here. Statista pegs the in-game ad market at $32.5 billion worldwide in 2023, with a projected annual growth rate of 9.1%. That kind of money provides a powerful incentive for publishers and supply-side platforms (SSPs) to look the other way.

I think it’s unreasonable to expect the marketers of the gummies to police themselves. They have gone to great pains to insulate themselves from the threat of litigation. These corporations are generally registered in jurisdictions like China or Cyprus, where legal enforcement of copyright or consumer protections is nonexistent. If someone like Oprah Winfrey has been unable to legally shut down the fraudulent use of her image and brand for two years, you can bet the average consumer who has been ripped off has no recourse.

But perhaps one of the winners in this fraudulent ecosystem — the SSPs — should consider cracking down on this practice.

In nature, predators are kept in check by something called a predator-prey relationship. If predators become too successful, they eliminate their prey and seal their own doom. But this relationship only works if there are no new sources of prey. If we’re talking about an ecosystem that constantly introduces new prey, nothing keeps predators in check.

Let’s look at the incentive for the game publishers to police the predators. True, allowing fraudulent ads does no favours for the users of their games. A large-scale study by Gao, Zeng, Lu et al. found that bad ads lead to a bad user experience.

But do game publishers really care? There is no real user loyalty to games, so churn and burn seems to be the standard operating procedure. This creates an environment particularly conducive to predators.

So what about the SSPs?

GeoEdge, an ad security solution that guards against malvertising, among other things, has just released its Q1 Ad Quality Report. In an interview, Yuval Shiboli, the company’s director of product marketing, said that while malicious ads are common across all channels, in-game advertising is particularly bad because of a lack of active policing: “The fraudsters are very selective in who they show their malicious ads, looking for users who are scam-worthy, meaning there is no security detection software in the environment.”

Quality of advertising is usually directly correlated with the pricing of the ad inventory. The cheaper the ad, the poorer the quality. In-game ads are relatively cheap, giving fraudulent predators an easy environment to thrive in. And this entire environment is created by the SSPs.

According to Shiboli, it’s a little surprising to learn who are the biggest culprits on the SSP side: “Everybody on both the sell side and buy side works with Google, and everyone assumes that its platforms are clean and safe. We’ve found the opposite is true, and that of all the SSP providers, Google is the least motivated to block bad ads.”

By allowing — even encouraging — a predatory marketplace to exist, Google and other SSPs are doing nothing less than aiding and abetting criminals. In the short term, this may add incrementally to their profits, but at what long-term price?

Real Life Usually Lives Beyond The Data

There’s an intriguing little show on Netflix that you’ve probably never heard of but that might be worth checking out. It’s called Travelers, and it’s a Canadian-produced sci-fi show that ran from 2016 to 2018. The only face in it you’ll probably recognize is Eric McCormack, the Will from Will and Grace. He also happens to be a producer of the series.

The premise is this: special operatives from the future (the “travelers”) travel back in time to the present to prevent the collapse of society. They essentially “body snatch” everyday people from our present at the exact moment of their death and use their lives as a cover to fulfill their mission.

And that’s not even the interesting part.

The real intrigue of the show comes from the everyday conflicts that arise from the imperfect shoehorning of a stranger into the target’s real-world experience. The showrunners do a masterful job of weaving this into their storylines: the joy of eating a hamburger, a stomach turning at the thought of drinking actual milk from a cow, calling your “wife” by her real name when you haven’t called her that in all the time you’ve known her. And it’s here that I discovered an unexpected parallel to our current approach to marketing.

This is a bit of a detour, so bear with me.

In the future, the research team compiles as much information as it can about each of the people they’re going to “borrow” for their operatives. The profiles are built from social media, public records and everything else that can be discovered from the available data.

But when the “traveler” actually takes over their life, there are no end of surprises and challenges – made up of all the trivial stuff that didn’t make it into the data profile.

You probably see where I’m going with this. When we rely solely on data to understand our customers or prospects, there will always be surprises. You can only learn those little quirks and nuances by diving into their lives.

That’s what A.G. Lafley, CEO of Procter & Gamble from 2000 to 2010 and then again from 2013 to 2015, knew. In a 2002 Forbes profile of Lafley, writer Luisa Kroll said,

“Like the monarch in Mark Twain’s A Connecticut Yankee in King Arthur’s Court, Lafley often makes house calls incognito to find out what’s on the minds of his subjects. ‘Too much time was being spent inside Procter & Gamble and not enough outside,’ says Lafley, who took over during a turbulent period two years ago. ‘I am a broken record when it comes to saying, “We have to focus on the customer.”‘”

It wasn’t a bad way to run a business. Under Lafley’s guidance, P&G doubled its market cap, making it one of the 10 most valuable companies in the world.

Humans are messy and organic. Data isn’t. Data demands to be categorized, organized and columnized. When we deal with data, we necessarily have to treat it like data. And when we do that, we’re going to miss some stuff – probably a lot of stuff. And almost all of it will be the stuff of our lives, the things that drive behavior, the sparks that light our emotions.

It requires two different ways of thinking. Data sits in our prefrontal lobes, demanding that the brain be relentlessly rational. Data reduces behavior to bits and bytes, to be manipulated by algorithms into plotted trendlines and linear graphs. In fact, automation today can remove us humans from the process entirely. Data and A.I. work together to pull the levers and push the buttons on our advertising strategies. We just watch the dashboard.

But there’s another way of thinking – one that skulks down in the brain’s subcortical basement, jammed in the corner between the amygdala and the ventral striatum. It’s here where we stack all the stuff that makes us human; all the quirks and emotions, all our manias and motivations. This stuff is not rational, it’s not logical, it’s just life.

That’s the stuff A.G. Lafley found when he walked out the front door of Procter & Gamble’s headquarters in Cincinnati and into the homes of their customers. And that’s the stuff the showrunners of Travelers had the insight to include in their narratives.

It’s the stuff that can make us sensational or stupid – often at the same time.