Privacy’s Last Gasp

We’ve been sliding down the slippery slope of privacy rights for some time. But the rapid onslaught of disruption caused by AI is unfurling a massive red flag over any illusions we may still have about our privacy.

We have been giving away a massive amount of our personal data for years now without really considering the consequences. If we do think about privacy, we do so as we hear about massive data breaches. Our concern typically is about our data falling into the hands of hackers and being used for criminal purposes.

But when you combine AI and data, a bigger concern should catch our attention. Even if we have been able to retain some degree of anonymity, this is no longer the case. Everything we do is now traceable back to us.

Major tech platforms generally deal with any privacy concerns with the same assurance: “Don’t worry, your data is anonymized!” But in reality, it takes very few connected dots to relink even anonymized data back to your identity.

Here is an example from the Electronic Frontier Foundation. Let’s say there is data that includes your name, your ZIP or postal code, your gender and your birthdate. If you remove your name, but include those other identifiers, technically that data is now anonymized.

But, says the EFF:

  • First, think about the number of people that share your specific ZIP or postal code. 
  • Next, think about how many of those people also share your birthday. 
  • Now, think about how many people share your exact birthday, ZIP code, and gender. 

According to a study from Carnegie Mellon University, those three factors are all that’s needed to identify 87% of the US population. If we fold in AI and its ability to quickly crunch massively large data sets to identify patterns, that percentage effectively becomes 100% and the data horizon expands to include pretty much everything we say, post, do or think. We may not think so, but we are constantly in the digital data spotlight and it’s a good bet that somebody, somewhere is watching our supposedly anonymous activities.
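To see just how few dots that is, here’s a minimal back-of-the-envelope sketch in Python. The population and date-range figures are my own illustrative assumptions, not numbers from the CMU study:

```python
# Rough sketch of why ZIP + birthdate + gender is so identifying.
# All inputs are illustrative assumptions, not figures from the study.

zip_population = 10_000         # assumed residents in a typical ZIP code
possible_birthdates = 365 * 80  # rough span of living birthdates (~29,200)
genders = 2

# Expected number of people sharing one exact (ZIP, birthdate, gender)
# combination, assuming birthdates and gender are spread evenly.
expected_matches = zip_population / (possible_birthdates * genders)

print(f"Expected matches per combination: {expected_matches:.2f}")
# ~0.17 -- on average, less than one person per combination,
# so a single match usually points to exactly one individual.
```

Once the expected count per combination drops below one, most combinations that occur at all occur exactly once – which is the statistical heart of re-identification.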

The other shred of comfort we tend to cling to when we trade away our privacy is that at least the data is held by companies we are familiar with, such as Google and Facebook. But according to a recent survey by Merkle, reported in MediaPost by Ray Schultz, even that small comfort may be slipping from our grasp. Fifty-eight percent of respondents said they were concerned about whether their data and identity were being protected.

Let’s face it. If a platform is supported by advertising, then that platform will continue to develop tools to more effectively identify and target prospects. You can’t do that and also effectively protect privacy. The two things are diametrically opposed. The platforms are creating an ecosystem where it will become easier and easier to exploit individuals who thought they were protected by anonymity. And AI will exponentially accelerate the potential for that exploitation.

The platforms’ failure to protect individuals is currently being investigated by the US Senate Judiciary Committee. The individuals in this case are children and the protection that has failed is against sexual exploitation. None of the platform executives giving testimony intended for this to happen. Mark Zuckerberg apologized to the parents at the hearing, saying, “I’m sorry for everything you’ve all gone through. It’s terrible. No one should have to go through the things that your families have suffered.”

But this exploitation didn’t happen just because of one little crack in the system or because someone slipped up. It happened because Meta has intentionally and systematically built a platform that collects the data and assembles the audience that make this exploitation possible. It’s like a gun manufacturer standing up and saying, “I’m sorry. We never imagined our guns would be used to actually shoot people.”

The most important question is: do we care that our privacy has effectively been destroyed? Sure, when we’re asked in a survey if we’re worried, most of us say yes. But our actions say otherwise. Would we trade away the convenience and utility these platforms offer us in order to get our privacy back? Probably not. And all the platforms know that.

As I said at the beginning, our privacy has been sliding down a slippery slope for a long time now. And with AI now in the picture, it’s probably going down for the last time. There is really no more slope left to slide down.

What If We Let AI Vote?

In his bestseller Homo Deus, Yuval Noah Harari argues AI might mean the end of democracy. And his reasoning comes from an interesting perspective: how societies crunch their data.

Harari acknowledges that democracy might have been the best political system available to us – up to now. That’s because it relied on the wisdom of crowds. The hypothesis operating here is that if you get enough people together, each with different bits of data, you benefit from the aggregation of that data and – theoretically – if you allow everyone to vote, the aggregated data will guide the majority to the best possible decision.

Now, there are a truckload of “yeah, but”s in that hypothesis, but it does make sense. If the human ability to process data was the single biggest bottleneck in making the best governing decisions, distributing the processing amongst a whole bunch of people was a solution. Not the perfect solution, perhaps, but probably better than the alternatives. As Winston Churchill said, “It has been said that democracy is the worst form of Government except for all those other forms that have been tried from time to time.”
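As an aside, that hypothesis can be made concrete with a small simulation in the spirit of Condorcet’s jury theorem (my framing, not Harari’s): if each voter independently picks the better of two options just slightly more often than chance, majority rule becomes dramatically more reliable as the crowd grows.

```python
import random

def majority_accuracy(n_voters: int, p_correct: float, trials: int = 5_000) -> float:
    """Fraction of trials in which a simple majority of voters,
    each independently correct with probability p_correct,
    picks the better of two options."""
    wins = 0
    for _ in range(trials):
        correct = sum(random.random() < p_correct for _ in range(n_voters))
        if correct > n_voters / 2:
            wins += 1
    return wins / trials

for n in (1, 11, 101, 1001):
    print(f"{n:>5} voters: {majority_accuracy(n, 0.55):.3f}")
# Accuracy climbs from ~0.55 for a single voter toward ~1.0 for a
# thousand -- the statistical engine behind the "wisdom of crowds."
```

That distributed-processing advantage is exactly what Harari says AI takes away.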

So, if we look back at our history, democracy seems to emerge as the winner. But the whole point of Harari’s Homo Deus is to look forward. It is, he promises, “A Brief History of Tomorrow.” And that tomorrow includes a world with AI, which blows apart the human data-processing bottleneck: “As both the volume and speed of data increase, venerable institutions like elections, parties and parliaments might become obsolete – not because they are unethical, but because they don’t process data efficiently enough.”

The other problem with democracy is that the data we use to decide is dirty. Increasingly, thanks to the network-effect anomalies that come with social media, we are using data that has no objective value; it’s simply the emotional effluent of ideological echo chambers. This is true on both the right and left ends of the political spectrum. Human brains default to using available and easily digestible information that happens to conform to our existing belief schema. Thanks to social media, there is no shortage of this severely flawed data.

So, if AI can process data exponentially faster than humans, can analyze that data to make sure it meets some type of objectivity threshold, and can make decisions based on algorithms that are dispassionately rational, why shouldn’t we let AI decide who should form our governments?

Now, I pretty much guarantee that many of you, as you’re reading this, are saying that this is B.S. This will, in fact, be humans surrendering control in the most important of arenas. But I must ask in all seriousness, why not? Could AI do worse than we humans do? Worse than we have done in the past? Worse than we might do again in the very near future?

These are exactly the type of existential questions we have to ask when we ponder our future in a world that includes AI.

It’s no surprise that we have some hubris about believing we’re the best choice to be put in control of a situation. As Harari admits, the liberal human view that we have free will and should control our own future was really the gold standard. Like democracy, it wasn’t perfect, but it was better than all the alternatives.

The problem is that there is now a lot of solid science indicating that our concept of free will is an illusion. We are driven by biological algorithms built up over thousands of years to survive in a world that no longer exists. We self-apply a thin veneer of rationality and free will at the end to make ourselves believe that we were in control and meant to do whatever it was we did. What’s even worse, when it appears we might have been wrong, we double down on the mistake, twisting the facts to conform to our illusion of how we believe things are.

But we now live in a world where there is – or soon will be – a better alternative. One without the bugs that proliferate in the biological OS that drives us.

As another example of this impending crisis of our own consciousness, let’s look at driving.

Up to now, a human was the best choice to drive a car. We were better at it than chickens or chimpanzees. But we are at the point where that may no longer be true. There is a strong argument that – as of today – autonomous cars guided by AI are safer than human-controlled ones. And, if the jury is still out on this question today, it will almost certainly be true in the very near future. Yet, we humans are loath to admit the inevitable and give up the wheel. It’s the same story as making our democratic choices.

So, let’s take it one step further. If AI can do a better job than humans in determining who should govern us, it will also do a better job of the actual governing. All the same caveats apply. When you think about it, democracy boils down to various groups of people pointing the finger at those chosen by other groups, saying they will make more mistakes than our choice. The common denominator is this: everyone is assumed to make mistakes. And that is absolutely the case. Right or left, Republican or Democrat, liberal or conservative, no matter who is in power, they will screw up. Repeatedly.

Because they are, after all, only human.

Greetings from the Great, White (Frozen) North

This post comes to you from Edmonton, Alberta, where the outside temperature right now is minus forty degrees Celsius. If you’re wondering what that is in Fahrenheit, the answer is, “It doesn’t matter.” Minus forty is where the two scales match up.
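For anyone who wants the receipts, the crossover falls straight out of the standard conversion formula. Setting the two scales equal:

$$F = \tfrac{9}{5}C + 32, \qquad F = C \;\Rightarrow\; C = \tfrac{9}{5}C + 32 \;\Rightarrow\; -\tfrac{4}{5}C = 32 \;\Rightarrow\; C = F = -40.$$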

If you add a bit of a breeze to that, you get a windchill factor that makes it feel like minus fifty Celsius (-58° F). The weather lady on the morning news just informed me that at that temperature, exposed flesh freezes in two to five minutes. Yesterday, an emergency alert flashed on my phone warning us that Alberta’s power grid was overloaded and could collapse under the demand, causing rotating power outages.

I don’t know about you, but I don’t think anyone should live in a place where winter can kill you. Nothing works as it should when it gets this cold, humans included. And yet, Albertans are toughing it out. I noticed that when it gets this cold, the standard niceties that people say change. Instead of telling me to “have a nice day,” everyone has been encouraging me to “stay warm.”

There’s a weird sort of bonding that happens when the weather becomes the common enemy. Maybe we all become brothers and sisters in arms, struggling to survive against the elements. It got me to wondering: Is there a different sense of community in places where it’s really cold in the winter?

When I asked Google which countries had the strongest social ties, it gave me a list of nine: Finland, Norway, Canada, Denmark, Switzerland, Australia, Netherlands, Iceland and Italy. Seven of those places have snowy, cold winters. If you look at countries that have strong social democracies — governments established around the ideal of the common good — again, you’ll find that most of them are well north (or south, in the case of New Zealand) of the equator.

But let’s leave politics aside. Maybe it’s just the act of constantly transitioning from extreme cold to warm and cozy places where there’s a friendly face sincerely wishing you’ll “stay warm” that builds stronger social bonds. As I mentioned in a previous post, the Danes even have a name for it: hygge. It translates loosely to “coziness.”

There are definitely physical benefits to going from being really cold to being really warm. The Finns discovered this secret thousands of years ago when they created the sauna. The whole idea is to move repeatedly between a little hut where the temperature hovers around 80-90° C (176-194° F) and a jump through a hole you’ve cut in the ice into water barely above freezing. A paper from the Mayo Clinic lists the health benefits of saunas in a rather lengthy paragraph, touching on everything from reducing inflammation to clearer skin to fighting the flu.

But the benefits aren’t just physical. Estonia, which is just south of Finland, also has a strong sauna culture. A brilliant documentary by Anna Hints, “Smoke Sauna Sisterhood,” shows that the sauna can be a sacred space. As Estonia’s official submission to the Oscars, it’s in contention for a nomination.

Hints’ documentary shows that saunas can touch us on a deeply spiritual level, healing scars that can build up through our lives. There is something in the cycle of heat and cold that taps into inner truths. As Hints said in a recent interview, “With time, deeper, deeper layers of physical dirt start to come up to the surface, but also emotional dirt starts to come up to the surface.”

While I didn’t visit any saunas on my Edmonton trip, every time I ventured outside it was a hot-cold adventure. Everyone turns the thermostat up a little when it gets this cold, so you’re constantly going through doors where the temperature can swing 75 degrees Celsius (135 degrees Fahrenheit) in an instant. I don’t know if there’s a health benefit, but I can tell you it feels pretty damned good to get that warm welcome when you’re freezing your butt off.

Stay warm!

When AI Love Goes Bad

When we think about AI and its implications, it’s hard to wrap our own non-digital, flesh-and-blood brains around the magnitude of it. Try as we might, it’s impossible to forecast the impact of this massive wave of disruption that’s bearing down on us. So, today, in order to see what might be the unintended consequences, I’d like to zoom in on one particular example.

There is a new app out there. It’s called Anima and it’s an AI girlfriend. It’s not the only one. When it comes to potential virtual partners, there are plenty of fish in the sea. But – for this post, let’s stay true to Anima. Here’s the marketing blurb on her website: “The most advanced romance chatbot you’ve ever talked to. Fun and flirty dating simulator with no strings attached. Engage in a friendly chat, roleplay, grow your love & relationship skills.”

Now, if there’s one area where our instincts should kick in and alarm bells should start going off about AI, it should be sexual attraction. If there were one human activity that seems bound by necessity to being ITRW (in the real world), it should be this one.

If we start to imagine what might happen when we turn to AI for love, we could ask filmmaker Spike Jonze. He already imagined it, 10 years ago when he wrote the screenplay for “her”, the movie with Joaquin Phoenix. Phoenix plays Theodore Twombly, a soon-to-be divorced man who upgrades his computer to a new OS, only to fall in love with the virtual assistant (voiced by Scarlett Johansson) that comes as part of the upgrade.

Predictably, complications ensue.

To get back to Anima, I’m always amused by the marketing language developers use to lull us into the acceptance of things we should be panicking about. In this case, it was two lines: “No strings attached” and “grow your love and relationship skills.”

First, about that “no strings attached” thing – I have been married for 34 years now and I’m here to tell you that relationships are all about “strings.” Those “strings” can also be called by other names: empathy, consideration, respect, compassion and – yes – love. Is it easy to keep those strings attached – to stay connected with the person at the other end of those strings? Hell, no! It is a constant, daunting, challenging work in progress. But the alternative is cutting those strings and being alone. Really alone.

If we get the illusion of a real relationship through some flirty version of ChatGPT, will it be easier to cut the strings that keep us connected to other real people out there? Will we be fooled into thinking something is real when it’s just a seductive algorithm? In “her”, Jonze brings Twombly back to the real world, ending with a promise of a relationship with a real person as they both gaze at the sunset. But I worry that that’s just a Hollywood ending. I think many people – maybe most people – would rather stick with the “no strings attached” illusion. It’s just easier.

And will AI adultery really “grow your love and relationship skills?” No. No more than scrolling through your Facebook feed will grow your ability to determine accurate and reliable information. That’s just a qualifier the developer threw in so they didn’t feel crappy about leading their customers down the path to “AI-rmageddon”.

Even if we put all this other stuff aside for the moment, consider the vulnerable position we put ourselves in when we start mistaking robotic love for the real thing. All great cons rely on one of two things – either greed or love. When we think we’re in love, we drop our guard. We trust when we probably shouldn’t.

Take the Anima artificial girlfriend app, for example. We know nothing about the makers of this app. We don’t know where the data collected goes. We certainly have no idea what their intentions are. Is this really who you want to start sharing your most intimate chit-chat with? Even if their intentions are benign, this is an app built by a for-profit company, which means there needs to be a revenue model in it somewhere. I’m guessing that all your personal data will be sold to the highest bidder.

You may think all this talk of AI love is simply stupid. We humans are too smart to be sucked in by an algorithm. But study after study has shown we’re not. We’re ready to make friends with a robot at the drop of a hat. And once we hit friendship, can love be far behind?

When the News Hits Home

My, how things have changed.

My intention was to write a follow-up to last week’s post about Canada’s Bill C-18 and Meta’s banning of news on Facebook. I suppose this is a follow-up of sorts. But thanks to Mother Nature – that ofttimes bully – that story was pushed right out of the queue, replaced by something far more tragic and immediate.

To me, anyway.

I live in Kelowna. Chances are you’ve heard about my home in the last few days. If you haven’t, I can tell you that when I look out my window, all I can see is thick smoke. Which may be a good thing. Last Friday, when I could see, I spent the entire evening watching West Kelowna, across Okanagan Lake from my home, burn in the path of the oncoming McDougall Creek Wildfire. As the flames suddenly leapt towards the sky, you knew that was someone’s home igniting.

We don’t know how many homes have been lost. The fire has been too active for authorities to have the time to count. We have firefighters and first responders pouring in from around our province to help. Our Air Quality Index is 11 on a scale of 10, as bad as it can get. Thousands are out of their homes. Thousands more have their things packed by the door, ready to leave at a moment’s notice. We’re one of those.

But that’s enough about the fire. This post is about our weird relationship with the news.

When something like this happens, you have a very real, very visceral need to know what’s going on. For those of us who live here in British Columbia, the news has hit home in a way we could never have imagined. A few posts ago, I said it might be healthier for me to ignore the news, because it’s always alarming and very seldom relevant to me. Well, those words are now coming back to haunt me.

This disaster has thrown our reliance on Facebook for news into stark relief. This last Friday, Canada’s Transport Minister, Pablo Rodriguez, asked Meta to reverse its current ban on news: “We’ve seen that, throughout this emergency, Canadians have not had access to the crucial information they need. So, I ask Meta to reverse its decision, allow Canadians to have access to news on their platforms.”

But there’s another dimension to this that’s a bit more subtle yet even more frightening. It goes to the heart of how we handle crisis. I think you necessarily must “zoom in,” performing some type of terrible triage in your mind to be able to imagine the unimaginable. As the winds shift the fire away from your home, there’s relief. But other homes now lie in the path of the fire. In your head, you know that, but emotionally you can’t help but feel a lift. It’s not noble, but it’s human.

So let’s “zoom out” – a lot. We’re not the only ones this is happening to. This is a global crisis. Twenty-six thousand people have been evacuated on the Spanish island of Tenerife. A friend of mine, an airline pilot, was volunteering just a week ago to fly people out of Maui who had lost their homes in the tragic Lahaina fire.

Take a look at NASA’s FIRMS (Fire Information for Resource Management System) website, which gives a global map of hotspots from burning wildfires. I’ve set this link to wildfire activity in the last 7 days.

Scary as hell, right?

But can we actually process that, in a way that lets us move forward and start coping with this massive issue? Is it enough to change our behaviors in the way we must to finally start addressing climate change?

In a recent article on BBC.com, Richard Fisher talks about “construal level theory,” which says that the greater the psychological distance between the news and your life, the less likely it is to make you change your behavior. For me, the psychological distance between myself and climate change is roughly 1 kilometer (just over half a mile) as the crow flies. That’s how far it is from my house to the nearest evacuation alert area.

It doesn’t get much closer than that.  But will we change? Will anything change?

I’m not so sure. We’ve been through this before. Exactly 20 years ago, the Okanagan Mountain wildfire raged through Kelowna, displacing over 30,000 people and destroying 239 homes. It was a summer much like this, at the time the driest summer on record. This year, we have smashed that record, as we have many times since that fire. Once we picked up, rebuilt our homes and got back to life, nothing really changed.

And now, here we are again. Let’s hope that this time is different.

No News is Good News

I’m trying not to pay too much attention to the news. This is partly because I’m exhausted by the news, and partly because of the sad state of journalism today.

This isn’t just a “me” thing. Almost everyone I talk to says they’re trying to find coping mechanisms to deal with the news. The News industry – and its audience – has gone from being an essential part of a working democracy to something that is actually bad for you. In an online essay from four years ago, Swiss author Rolf Dobelli equates news consumption to a bad diet:

“(translated from its original German) News is to the mind what sugar is to the body. News is appetizing, easily digestible and at the same time highly harmful. The media feeds us morsels of trivial stories, tidbits that by no means satisfy our hunger for knowledge. Unlike with books and long, well-researched articles, there is no saturation when consuming news. We can devour unlimited amounts of messages; they remain cheap sugar candies. As with sugar, the side effects only show up with a delay.”

Rolf Dobelli, 2019

This alarming state is due to the fact that the News (in the US) is supported by advertising, which means it has a ravenous appetite for eyeballs. Because of this, it is highly profitable to make news addictive.

This creates a state, as Dobelli points out, where even though the news is highly inflammatory, like a constantly jangling alarm bell, almost all the news we consume is irrelevant to our daily lives. While the news we watch pushes all our hot buttons, it doesn’t serve a useful purpose. In fact, it does the exact opposite: it leads to chronic mental and physical ill-being and may cause us to start ignoring the warning signs we should be paying attention to.

A study last year (McLaughlin, Gotlieb and Mills) found ties between problematic news consumption and mental ill-being. The study found that 16.5% of 1,100 people polled in an online survey showed signs of “severely problematic” news consumption, which led them to focus less on school, work and family, and contributed to an inability to sleep.

Dobelli’s essay goes even further, pointing a finger at excessive news consumption as the cause of a list of issues including cognitive errors, inhibiting deeper thinking, wasting time, killing creativity, making us more passive and even wiring our brains for addiction in a manner similar to drugs.

All these negative side effects come from chronic stress – a constant and pervasive alarmed state that excessive news consumption puts our brains into. And if you thought Dobelli’s list was scary, wait until you see the impact of chronic stress! It actually attacks the brain by releasing excessive amounts of cortisol and restricting the uptake of serotonin, which can increase inflammation, lead to depression, shrink your hippocampus and impact your memory, make it difficult to sleep and impair your ability to think rationally.

To put a new twist on an old saying, “No news is good news.”

But let’s put aside for a moment the physical and mental toll that news takes on us. Even if none of that were true, our constant diet of bad news can also lead to something known as “alarm fatigue.”

Alarm fatigue is essentially our response to the proverbial boy who cried wolf. After several false alarms, we stop paying attention. And on the one time when we should be paying attention, we are caught with our guard down.

There is one other problem with our news diet: it oversimplifies complex problems into simple sound bites. Thomas Jefferson said, “An educated citizenry is a vital requisite for our survival as a free people.” But when the news abdicates its role as an informer to pursue profit as entertainment, it is no longer educating us. It is pandering to us, stuffing us with bite-sized opinion pieces that reinforce our beliefs – right or wrong. We are never challenged to examine our beliefs or explore the complexity of the wicked problems that confront us. Real journalism has been replaced by profitable punditry.

All this leaves us with a choice. Until the News industry cleans up its act (I’m not holding my breath), you’re likely far better off ignoring it. Or at least, ignoring the profit-driven platforms that are hungry for eyeballs. Stay informed by turning to books, long articles and true investigative journalism. That’s what I’m going to start doing.

Failing all that, just think about things. I understand it’s good for you.

Why I’m Worried About AI

Even in my world, which is nowhere near the epicenter of the technology universe, everyone is talking about AI. And depending on who’s talking, it’s either going to be the biggest boon to humanity, or it’s going to wipe us out completely. Middle ground seems to be hard to find.

I recently attended a debate at the local university about it. Two were arguing for AI, and two were arguing against. I went into the debate somewhat worried. When I walked out at the end of the evening, my worry was bubbling just under the panic level.

The “For” Team had a computer science professor – Kevin Leyton-Brown, and a philosophy professor – Madeleine Ransom. Their arguments seemed to rely mainly on creating more leisure time for us by freeing us from the icky jobs we’d rather not do. Leyton-Brown did make a passing reference to AI helping us to solve the many, many wicked problems we face, but he never got into specifics.

“Relax!” seemed to be the message. “This will be great! Trust us!”

The “Against” Team included a professor in Creative and Critical Studies – Bryce Traister. As far as I could see, he seemed to be mainly worried about AI replacing Shakespeare. He did seem quite enamored with the cleverness of his own quips.

It was the other “Against” debater who was the only one to actually talk about something concrete I could wrap my head around. Wendy Wong is a professor of Political Science. She has a book on data and human rights coming out this fall. Many of her concerns focused on this area.

Interestingly, the AI debaters all mentioned Social Media in their arguments. And on this point, they were united. All the debaters agreed that the impact of Social Media has been horrible. But the boosters were quick to say that AI is nothing like Social Media.

Except that it is. Maybe not in terms of the technology that lies beneath it, but in terms of the unintended consequences it could unleash, absolutely! Like Social Media, what will get us with AI are the things we don’t know we don’t know.

I remember when social media first appeared on the scene. As with AI, there were plenty of evangelists lining up to say the technology would connect us in ways we couldn’t have imagined. We were redefining community, removing the physical constraints that had previously limited connections.

If there was a difference between social media and AI, it was that I don’t remember the same doomsayers at the advent of social media. Everyone seemed to be saying “This will be great! Trust us!”

Today, of course, we know better. No one was warning us that social media would divide us in ways we never imagined, driving a wedge down the ideological middle of our society. There were no hints that social media could (and still might) short circuit democracy.

Maybe that’s why we’re a little warier when it comes to AI. We’ve already been fooled once.

I find that AI Boosters share a similar mindset – they tend to be from the S.T.E.M. (Science, Technology, Engineering and Math) School of Thought. As I’ve said before, these types of thinkers tend to mistake complex problems for complicated ones. They think everything is solvable, if you just have a powerful enough tool and apply enough brain power. For them, AI is the Holy Grail – a powerful tool that potentially applies unlimited brain power.

But the dangers of AI are hidden in the roots of complexity, not complication, and that requires a different way of thinking. If we’re going to get some glimpse of what’s coming our way, I am more inclined to trust the instincts of those who think in terms of the humanities. A thinker, for example, such as Yuval Noah Harari, author of Sapiens.

Harari recently wrote an essay in the Economist that may be the single most insightful thing I’ve read about the dangers of AI: “AI has gained some remarkable abilities to manipulate and generate language, whether with words, sounds or images. AI has thereby hacked the operating system of our civilisation.”

In my previous experiments with ChatGPT, it was this fear that was haunting me. Human brains operate on narratives. We are hard-wired to believe them. By using language, AI has a back door into our brains that bypasses all our protective firewalls.

My other great fear is that the development of AI is being driven by for-profit corporations, many of which rely on advertising as their main source of revenue. If ever there was a case of putting the fox in charge of the henhouse, this is it!

When it comes to AI, it’s not my job I’m afraid of losing. It’s my ability to sniff out AI-generated bullshit. That’s what’s keeping me up at night.

Deconstructing a Predatory Marketplace

Last week, I talked about a predatory ad market that was found in — of all places — in-game ads. And the predators are — of all things — the marketers of Keto Gummies. This week, I’d like to look at why this market exists, and why someone should do something about it.

First of all, let’s understand what we mean by “predatory.” In biological terms, predation is a zero-sum game. For a predator to win, someone has to lose. On Wikipedia, it’s phrased a little differently: “Predatory marketing campaigns may (also) rely on false or misleading messaging to coerce individuals into asymmetrical transactions.”

“Asymmetrical” means the winner is the predator and the loser is the prey.

In the example of the gummy market, there are three winners — predators — and three losers, or prey. The winners are the marketers who are selling the gummies, the publishers who are receiving the ad revenue and the supply-side platform that mediates the marketplace and takes its cut.

The losers — in ascending order of loss — are the users of the games who must suffer through these crappy ads, the celebrities who have had their names and images illegally co-opted by the marketer, and the consumers who are duped into actually buying a bottle of these gummies.

You might argue the order of the last two, depending on what value you put on the brand of the celebrity. But in terms of sheer financial loss, consumer fraud is a significant issue, and one that gets worse every year.  In February, the Federal Trade Commission reported that U.S. consumers lost $8.8 billion to scams last year, many of which occurred online. The volume of scams is up 30% over 2021, and is 70% higher than it was in 2020.

So it’s not hard to see why this market is predatory. But is it fraudulent? Let’s apply a legal litmus test. Fraud is generally defined as “any form of dishonest or deceptive behavior that is intended to result in financial or personal gain for the fraudster, and does harm to the victim.”

Based on this, fraud does seem to apply. So why doesn’t anyone do anything?

For one, we’re talking about a lot of potential money here. Statista pegs the in-game ad market at $32.5 billion worldwide in 2023, with a projected annual growth rate of 9.1%. That kind of money provides a powerful incentive to publishers and supply-side platforms (SSPs) to look the other way.
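To see what that growth rate compounds to, here’s a quick sketch (a naive straight-line extrapolation on my part, not Statista’s actual forecast):

```python
# Naive compounding of the Statista figures; purely illustrative.
market_2023 = 32.5  # USD billions (Statista, 2023)
cagr = 0.091        # projected annual growth rate

for years_out in range(0, 9):
    value = market_2023 * (1 + cagr) ** years_out
    print(f"{2023 + years_out}: ${value:.1f}B")
# At ~9.1% a year, the market roughly doubles in about eight years.
```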

I think it’s unreasonable to expect the marketers of the gummies to police themselves. They have gone to great pains to move themselves away from the threat of litigation. These corporations are generally registered in jurisdictions like China or Cyprus, where legal enforcement of copyright or consumer protections is nonexistent. If someone like Oprah Winfrey has been unable to legally shut down the fraudulent use of her image and brand for two years, you can bet the average consumer who has been ripped off has no recourse.

But perhaps one of the winners in this fraudulent ecosystem — the SSPs – should consider cracking down on this practice.

In nature, predators are kept in check by something called a predator-prey relationship. If predators become too successful, they eliminate their prey and seal their own doom. But this relationship only works if there are no new sources of prey. If we’re talking about an ecosystem that constantly introduces new prey, nothing keeps predators in check.

Let’s look at the incentive for the game publishers to police the predators. True, allowing fraudulent ads does no favours for the users of their game. A large-scale study by Gao, Zeng, Lu et al. found that bad ads lead to a bad user experience.

But do game publishers really care? There is no real user loyalty to games, so churn and burn seems to be the standard operating procedure. This creates an environment particularly conducive to predators.

So what about the SSPs?

GeoEdge, an ad security solution that guards against malvertising, among other things, has just released its Q1 Ad Quality Report. In an interview, Yuval Shiboli, the company’s director of product marketing, said that while malicious ads are common across all channels, in-game advertising is particularly bad because of a lack of active policing: “The fraudsters are very selective in who they show their malicious ads, looking for users who are scam-worthy, meaning there is no security detection software in the environment.”

Quality of advertising is usually directly correlated with the pricing of the ad inventory. The cheaper the ad, the poorer the quality. In-game ads are relatively cheap, giving fraudulent predators an easy environment to thrive in. And this entire environment is created by the SSPs.

According to Shiboli, it’s a little surprising to learn who the biggest culprits are on the SSP side: “Everybody on both the sell side and buy side works with Google, and everyone assumes that its platforms are clean and safe. We’ve found the opposite is true, and that of all the SSP providers, Google is the least motivated to block bad ads.”

By allowing — even encouraging — a predatory marketplace to exist, Google and other SSPs are doing nothing less than aiding and abetting criminals. In the short term, this may add incrementally to their profits, but at what long-term price?

The Dangerous Bits about ChatGPT

Last week, I shared how ChatGPT got a few things wrong when I asked it who Gord Hotchkiss was. I did this with my tongue at least partially implanted in cheek – but the response did show me a real potential danger here, coming from how we will interact with ChatGPT.

When things go wrong, we love to assign blame. And if ChatGPT gets things wrong, we will be quick to point the finger at it. But let’s remember, ChatGPT is a tool, and the fault very seldom lies with the tool. The fault usually lies with the person using the tool.

First of all, let’s look at why ChatGPT put together a bio of me that was somewhat less than accurate (although it was very flattering to yours truly).

When AI Hallucinates

I have found a few articles that call ChatGPT out for lying. But lying is an intentional act, and – as far as I know – ChatGPT has no intention of deliberately leading us astray. Based on how ChatGPT pulls together information and synthesizes it into a natural language response, it actually thought that “Gord Hotchkiss” did the things it told me I had done.

You could more accurately say ChatGPT is hallucinating – giving a false picture based on the information it retrieves and then tries to connect into a narrative. It’s a flaw that will undoubtedly get better with time.

The problem comes with how ChatGPT handles its dataset and determines relevance between items in that dataset. In this thorough examination by Machine Learning expert Devansh Devansh, ChatGPT is compared to predictive autocomplete on your phone. Sometimes, through a glitch in the AI, it can take a weird direction.

When this happens on your phone, it’s word by word, and you can easily spot where things are going off the rails. With ChatGPT, an initial error that might be small at first continues to propagate until the AI has spun complete bullshit and packaged it as truth. This is how it fabricated the Think Tank of Human Values in Business, a completely fictional organization, and inserted it into my CV in a very convincing way.
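To illustrate the mechanics, here’s a deliberately tiny toy in Python (a made-up bigram model, nothing like GPT’s real architecture or scale) in which one unlucky early sample commits the generator to a fluent but entirely fictional ending:

```python
import random

# A toy bigram "language model": each word maps to weighted next-word
# options. Entirely made up for illustration; real LLMs predict over
# vocabularies of tens of thousands of tokens.
model = {
    "gord":       [("founded", 9), ("co-founded", 1)],
    "founded":    [("enquiro,", 1)],
    "enquiro,":   [("a", 1)],
    "a":          [("search", 1)],
    "search":     [("agency.", 1)],
    "co-founded": [("the", 1)],
    "the":        [("think", 1)],
    "think":      [("tank", 1)],
    "tank":       [("of", 1)],
    "of":         [("human", 1)],
    "human":      [("values.", 1)],
}

def generate(start: str) -> str:
    words = [start]
    # Each step samples one next word; every later step is conditioned
    # on whatever was already chosen, so an early error never self-corrects.
    while words[-1] in model and not words[-1].endswith("."):
        options, weights = zip(*model[words[-1]])
        words.append(random.choices(options, weights=weights)[0])
    return " ".join(words)

for _ in range(5):
    print(generate("gord"))
# Most runs produce the factual "gord founded enquiro, a search agency."
# But an unlucky first sample (about 1 in 10 here) yields "gord
# co-founded the think tank of human values." -- fluent, confident
# and entirely fictional.
```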

There are many, many others who know much more about AI and Natural Language Processing than I do, so I’m going to recognize my limits and leave it there. Let’s just say that ChatGPT is prone to sharing its AI hallucinations in a very convincing way.

Users of ChatGPT Won’t Admit Its Limitations

I know and you know that marketers are salivating over the possibility of AI producing content at scale for automated marketing campaigns. There is a frenzy of positively giddy accounts about how ChatGPT will “revolutionize Content Creation and Analysis” – including this admittedly tongue-in-cheek one co-authored by MediaPost Editor in Chief Joe Mandese and – of course – ChatGPT.

So what happens when ChatGPT starts to hallucinate in the middle of a massive social media campaign that is totally on autopilot? Who will be the ghost in the machine that will say, “Whoa there, let’s just take a sec to make sure we’re not spinning out fictitious and potentially dangerous content”?

No one. Marketers are only human, and humans will always look for the path of least resistance. We work to eliminate friction, not add it. If we can automate marketing, we will. And we will shift the onus of verifying information to the consumer of that information.

Don’t tell me we won’t, because we have in the past and we will in the future.

We Believe What We’re Told

We might like to believe we’re Cartesian, but when it comes to consuming information, we’re actually Spinozan.

Let me explain. French philosopher René Descartes and Dutch philosopher Baruch Spinoza had two different views of how we determine if something is true.

Descartes believed that understanding and believing were two different processes. According to Descartes, when we get new information, we first analyze it and then decide if we believe it or not. This is the rational assessment that publishers and marketers always insist we humans do, and it’s their fallback position when they’re accused of spreading misinformation.

But Baruch Spinoza believed that understanding and belief happened at the same time. We start from a default position of believing information to be true without really analyzing it.

In 1993, Harvard Psychology Professor Daniel Gilbert decided to put the debate to the test (Gilbert, Tafarodi and Malone). He split a group of volunteers in half and gave both halves a text description detailing a real robbery. In the text there were true statements, shown in green, and false statements, shown in red. Some of the false statements made the crime appear to be more violent.

After reading the text, the study participants were supposed to decide on a fair sentence. But one of the groups got interrupted with distractions. The other group completed the exercise with no distractions. Gilbert and his researchers believed the distracted group would behave in a more typical way.

The distracted group gave out substantially harsher sentences than the other group. Because they were distracted, they forgot that green sentences were true and red ones were false. They believed everything they read. (In fact, Gilbert’s paper was called “You Can’t Not Believe Everything You Read.”)

Gilbert’s study showed that humans tend to believe first and that we actually have to “unbelieve” if something is eventually proven to us to be false. One study even found the place in our brain where this happens – the right inferior prefrontal cortex. This suggests that “unbelieving” makes the brain work harder than believing, which happens by default.

This brings up a three-pronged dilemma when we consider ChatGPT: it will tend to hallucinate (at least for now), users of ChatGPT will disregard that flaw when there are significant benefits to doing so, and consumers of ChatGPT generated content will believe those hallucinations without rational consideration.

When Gilbert wrote his paper, he was still three decades away from this dilemma, but he wrapped up with a prescient question:

“The Spinozan hypothesis suggests that we are not by nature, but we can be by artifice, skeptical consumers of information. If we allow this conceptualization of belief to replace our Cartesian folk psychology, then how shall we use it to structure our own society? Shall we pander to our initial gullibility and accept the social costs of prior restraint, realizing that some good ideas will inevitably be suppressed by the arbiters of right thinking? Or shall we deregulate the marketplace of thought and accept the costs that may accrue when people are allowed to encounter bad ideas? The answer is not an easy one, but history suggests that unless we make this decision ourselves, someone will gladly make it for us.”

Daniel Gilbert

What Gilbert couldn’t know at the time was that “someone” might actually be a “something.”

(Image:  Etienne Girardet on Unsplash)

I Wish I Were Half the Person ChatGPT Thinks I Am

Yes, I’ve finally crawled aboard the journalist bandwagon about ChatGPT. The reason I’ve waited so long is that I don’t really have any valid technical or professional perspective to add to the conversation. All I have is my own personal opinion.

So here it is, for what it’s worth.

For my first test, I wanted to ask ChatGPT about something where there would be a fair amount of information online but also something where I had the advantage of knowing the inside scoop, allowing me to judge the accuracy of its response. So I asked about the Golden Triangle – the outcome of eye tracking studies we did almost 20 years ago (I just had a minor internal meltdown when I did the mental math and realized that was two decades ago).

I have to admit, ChatGPT nailed it. It did an accurate and concise summary. I’m not sure I could have done better myself.

Google’s Golden Triangle, also known as the “Golden Triangle of Search,” refers to the top section of Google’s search engine results page (SERP) where the most relevant and highest-ranked search results are displayed. It is called the “golden triangle” because eye-tracking studies have shown that users tend to focus their attention on the top three results in the search results page, forming a triangular shape.

The concept of the Golden Triangle is important for search engine optimization (SEO) because it emphasizes the importance of ranking high in search results. Websites that appear in the Golden Triangle are more likely to receive clicks and traffic than those that appear lower on the page.

However, it’s worth noting that the layout of Google’s search engine results page has changed over the years, and the Golden Triangle may not be as relevant as it once was. Google now includes various types of results, such as featured snippets and knowledge panels, that may appear above or alongside the traditional organic search results. As a result, SEO strategies have evolved to include optimizing for these new types of results as well.

When I asked who discovered The Golden Triangle, ChatGPT accurately said it was a joint effort.

The concept of the Golden Triangle in relation to Google’s search engine results page (SERP) was not discovered by a single person, but rather through a combination of eye-tracking studies and data analysis conducted by various researchers and digital marketing professionals.

One of the earliest studies that examined how users interact with Google’s search results page was conducted by Enquiro Research in 2005. This study, titled “The Golden Triangle,” found that users tended to focus their attention on the top-left portion of the search results page, where the first few results were displayed.

Since then, other studies and analyses have confirmed the importance of the top section of the search results page, and the term “Golden Triangle” has become a widely used term in the SEO industry to describe this phenomenon.

For the record, the main players in identifying the pattern were Kevin Lee at DidIt, myself and my agency, Enquiro, EyeTools, the San Francisco eye-tracking consultants who conducted the first study and – in an independent study – Jakob Nielsen at the Nielsen Norman Group. It was Kevin and I who gave it the name The Golden Triangle, mere moments before we had to present the findings to a group at a search conference in San Jose.

So let’s try something a little less black and white. Let’s see how ChatGPT does when it has to freestyle a bit. I asked who I was.

The first paragraph was pretty accurate:

Gord Hotchkiss is a Canadian internet marketing and search engine optimization (SEO) expert. He is the CEO of a digital marketing agency called “Enquiro Search Solutions” which specializes in providing marketing and SEO services to businesses. Gord Hotchkiss is also a well-known speaker and author on the topics of online marketing and search engine optimization.

The second paragraph started pumping my ego a little, throwing in some accomplishments I either never did or can’t remember doing:

Gord Hotchkiss has been involved in the online marketing industry for over 20 years and has worked with a wide range of clients, including Fortune 500 companies. He has written numerous articles on the topics of SEO, online marketing, and search engine technology for publications such as Search Engine Land, ClickZ, and MarketingProfs.

But in the third paragraph, ChatGPT definitely had me confused with someone else:

In addition to his work in the online marketing industry, Gord Hotchkiss is also a co-founder of the “Think Tank of Human Values in Business”, a non-profit organization that focuses on the intersection of business and ethics. He is known for his thought leadership in the industry and for advocating for ethical and sustainable business practices.

Okay, where the hell did that come from? It sounds extraordinarily cool and it’s something I would love to do, but it’s definitely not on my CV. As far as I’m aware, there is no such thing as the “Think Tank of Human Values in Business.” I wish there were, but there’s not.

I tried again. This time, ChatGPT seemed a little less sure of itself. The first sentence was somewhat correct, but after that ChatGPT seemed to manufacture my bio out of nothing but moonbeams and fairy dust. It said I was:

the CEO of a leading digital marketing agency, named “Digital Always Media Inc.” Gord is also a co-founder and chair of the “The Agents of Change” conference, an annual conference focused on helping businesses adapt to the ever-changing digital landscape.

Gord has written numerous articles and books on digital marketing and has been featured in several major media outlets, including The New York Times, The Wall Street Journal, and CNN.

I don’t know who this particular Gord Hotchkiss is, but I’d like to meet him. My AI doppelgänger is leaving me in the dust. I’d better get my shit together.

Or maybe ChatGPT can actually see into the future, showing me the person I just haven’t become yet.

What worries me is how easily ChatGPT constructed complete fabrications of who I was that sounded perfectly plausible. If I didn’t know myself better, I would have simply accepted it as fact. And that – unfortunately – is what we tend to do. We don’t analyze new information and determine if it’s right or wrong. Our default is to believe it’s true until proven otherwise.

It’s this human tendency that flags the real danger with ChatGPT. And, for that reason, I have some more digging to do.

Maybe this other Gord Hotchkiss guy can give me a hand. He sounds wicked smart.

(Image by Brian Penny — Pixabay license)