Talking Out Loud to Myself

I talk to myself out loud. Yes, full conversations, questions and answers, even debates — I can do everything all by myself.

I don’t do it when people are around. I’m just not that confident in my own cognitive quirks. It doesn’t seem, well… normal, you know?

But between you and me, I do it all the time. I usually walk at the same time. For me, nothing works better than some walking and talking with myself to work out particularly thorny problems.

Now, if I were using Google to diagnose myself, it would be a coin toss whether I was crazy or a genius. It could go either way. One of the sites I clicked through to said it could be a symptom of psychosis. But another site pointed to a study at Bangor University (2012 – Kirkham, Breeze, Mari-Beffa) that indicates that talking to yourself out loud may indicate a higher level of intelligence. Apparently, Nikola Tesla talked to himself during lightning storms. Of course, he also had a severe aversion to women who wore pearl earrings. So the jury may still be out on that one.

I think pushing your inner voice through the language processing center of your brain and actually talking out loud does something to crystallize fleeting thoughts. One of the researchers of the Bangor study, Paloma Mari-Beffa, agrees with this hypothesis:

“Our results demonstrated that, even if we talk to ourselves to gain control during challenging tasks, performance substantially improves when we do it out loud.”

Mari-Beffa continues,

“Talking out loud, when the mind is not wandering, could actually be a sign of high cognitive functioning. Rather than being mentally ill, it can make you intellectually more competent. The stereotype of the mad scientist talking to themselves, lost in their own inner world, might reflect the reality of a genius who uses all the means at their disposal to increase their brain power.”

When I looked for any academic studies to support the value of talking out loud to yourself, I found one (Huang, Carr and Cao, 2001) that was obviously aimed at neuroscientists, something I definitely am not. But after plowing through it, I think it said the brain does work differently when you say things out loud.

Another one (Gruber, von Cramon 2001) even said that when we artificially suppress our strategy of verbalizing our thoughts, our brains seem to operate the same way that a monkey’s brain would, using different parts of the brain to complete different tasks (e.g., visual, spatial or auditory). But when allowed to talk to themselves, humans tend to use a verbalizing strategy to accomplish all kinds of tasks. This indicates that verbalization seems to be the preferred way humans work stuff out. It gives guide rails and a road map to our human brain.

But if we’ve learned anything about human brains, we’ve learned that they don’t all work the same way. Are some brains more likely to benefit from the owner talking to themselves out loud, for instance? Take introverts, for example. I am a self-confessed introvert. And I talk to myself. So I had to ask, are introverts more likely to have deep, meaningful conversations with themselves?

If you’re not an introvert, let me first tell you that introverts are generally terrible at small talk. But — if I do say so myself — we’re great at “big” talk. We like to go deep in our conversations, generally with just one other person. Walking and talking with someone is an introvert’s idea of a good time. So walking and talking with yourself should be the introvert’s holy grail.

While I couldn’t find any empirical evidence to support this correlation between self-talk and introversion, I did find a bucketful of sites about introverts noting that it’s pretty common for us to talk to ourselves. We are inclined to process information internally before we engage externally, so self-talk becomes an important tool in helping us to organize our thoughts.

Remember, external engagements tend to drain the battery of an introvert, so a little power management before the engagement to prevent running out of juice midway through a social occasion makes sense.

I know this is all a lot to think about. Maybe it would help to talk it out — by yourself.

Feature image by Brecht Bug – Flickr – Creative Commons

You Know What Government Agencies Need? Some AI

A few items on my recent to-do list have necessitated dealing with multiple levels of governmental bureaucracy: regional, provincial (this being in Canada) and federal. All three experiences were, without exception, a complete pain in the ass. So, having spent a good part of my life advising companies on how to improve their customer experience, the question that kept bubbling up in my brain was, “Why the hell is dealing with government such a horrendous experience?”

Anecdotally, I know everyone I know feels the same way. But what about everyone I don’t know? Do they also feel that the experience of dealing with a government agency is on par with having a root canal or colonoscopy?

According to a survey conducted last year by the research firm Qualtrics XM, the answer appears to be yes. This report paints a pretty grim picture. Satisfaction with government services ranked dead last when compared to private sector industries.

The next question, being that AI is all I seem to have been writing about lately, is this: “Could AI make dealing with the government a little less awful?”

And before you say it, yes, I realize I recently took a swipe at the AI-empowered customer service used by my local telco. But when the bar is set as low as it is for government customer service, I have to believe that even with the limitations of artificially intelligent customer service as it currently exists, it would still be a step forward. At least the word “intelligent” is in there somewhere.

But before I dive into ways to potentially solve the problem, we should spend a little time exploring the root causes of crappy customer service in government.

First of all, government has no competitors. That means there are no market forces driving improvement. If I have to get a building permit or renew my driver’s license, I have one option available. I can’t go down the street and deal with “Government Agency B.”

Secondly, in private enterprise, the maxim is that the customer is always right. This is, of course, bullshit.  The real truth is that profit is always right, but with customers and profitability so inextricably linked, things generally work out pretty well for the customer.

The same is not true when dealing with the government. Their job is to make sure things are (supposedly) fair and equitable for all constituents. And the determination of fairness needs to follow a universally understood protocol. The result of this is that government agencies are relentlessly regulation bound and fixated on policies and process, even if those are hopelessly archaic. Part of this is to make sure that the rules are followed, but let’s face it, the bigger motivator here is to make sure all bureaucratic asses are covered.

Finally, there is a weird hierarchy that exists in government agencies.  Frontline people tend to stay in place even if governments change. But the same is often not true for their senior management. Those tend to shift as governments come and go. According to the Qualtrics study cited earlier, less than half (48%) of government employees feel their leadership is responsive to feedback from employees. About the same number (47%) feel that senior leadership values diverse perspectives.

This creates a workplace where most of the people dealing with clients feel unheard, disempowered and frustrated. This frustration can’t help but seep across the counter separating them from the people they’re trying to help.

I think all these things are givens and are unlikely to change in my lifetime. Still, perhaps AI could be used to help us navigate the serpentine landscape of government rules and regulations.

Let me give you one example from my own experience. I have to move a retaining wall that happens to front on a lake. In Canada, almost all lake foreshores are Crown land, which means you need to deal with the government to access them.

I have now been bouncing back and forth between three provincial ministries for almost two years to try to get a permit to do the work. In that time, I have lost count of how many people I’ve had to deal with. Just last week, someone sent me a couple of user guides that “I should refer to” in order to help push the process forward. One of them is 29 pages long. The other is 42 pages. They are both about as compelling and easy to understand as you would imagine a government document would be. After a quick glance, I figured out that only two of the 71 combined pages are relevant to me.

As I worked my way through them, I thought, “surely some kind of ChatGPT interface would make this easier, digging through the reams of regulation to surface the answers I was looking for. Perhaps it could even guide you through the application process.”

Let me tell you, it takes a lot to make me long for an AI-powered interface. But apparently, dealing with any level of government is enough to push me over the edge.

Dove’s Takedown Of AI: Brilliant But Troubling Brand Marketing

The Dove brand has just placed a substantial stake in the battleground over the use of AI in media. In a campaign called “Keep Beauty Real”, the brand released a 2-minute video showing how AI can create an unattainable and highly biased (read “white”) view of what beauty is.

If we’re talking branding strategy, this campaign is a master class. It’s totally on-brand for Dove, which introduced its “Campaign for Real Beauty” 18 years ago. Since then, the company has consistently fought digital manipulation of advertising images, promoted positive body image and reminded us that beauty can come in all shapes, sizes and colors. The video itself is brilliant. You really should take a couple of minutes to see it if you haven’t already.

But what I found just as interesting is that Dove chose to use AI as a brand differentiator. The video starts by telling us, “By 2025, artificial intelligence is predicted to generate 90% of online content.” It wraps up with a promise: “Dove will never use AI to create or distort women’s images.”

This makes complete sense for Dove. It aligns perfectly with its brand. But it can only work because AI now has what psychologists call emotional valency. And that has a number of interesting implications for our future relationship with AI.

“Hot Button” Branding

Emotional valency is just a fancy way of saying that a thing means something to someone. The valence can be positive or negative. The term valence comes from the German word valenz, which means to bind. So, if something has valency, it’s carrying emotional baggage, either good or bad.

This is important because emotions allow us to — in the words of Nobel laureate Daniel Kahneman — “think fast.” We make decisions without really thinking about them at all. It is the opposite of rational and objective thinking, or what Kahneman calls “thinking slow.”

Brands are all about emotional valency. The whole point of branding is to create a positive valence attached to a brand. Marketers don’t want consumers to think. They just want them to feel something positive when they hear or see the brand.

So for Dove to pick AI as an emotional hot button to attach to its brand, it must believe that the negative valence of AI will add to the positive valence of the Dove brand. That’s how branding mathematics sometimes work: a negative added to a positive may not equal zero, but may equal 2 — or more. Dove is gambling that with its target audience, the math will work as intended.

I have nothing against Dove, as I think the points it raises about AI are valid — but here’s the issue I have with using AI as a brand reference point: It reduces a very complex issue to a knee-jerk reaction. We need to be thinking more about AI, not less. The consumer marketplace is not the right place to have a debate on AI. It will become an emotional pissing match, not an intellectually informed analysis. And to explain why I feel this way, I’ll use another example: GMOs.

How Do You Feel About GMOs?

If you walk down the produce or meat aisle of any grocery store, I guarantee you’re going to see a “GMO-Free” label. You’ll probably see several. This is another example of squeezing a complex issue into an emotional hot button in order to sell more stuff.

As soon as I mentioned GMO, you had a reaction to it, and it was probably negative. But how much do you really know about GMO foods? Did you know that GMO stands for “genetically modified organisms”? I didn’t, until I just looked it up now. Did you know that you almost certainly eat foods that contain GMOs, even if you try to avoid them? If you eat anything with sugar harvested from sugar beets, you’re eating GMOs. And over 90% of all canola, corn and soybean crops are GMOs.

Further, did you know that genetic modifications make plants more resistant to disease, more stable for storage and more likely to grow in marginal agricultural areas? If it weren’t for GMOs, a significant portion of the world’s population would have starved by now. A 2022 study suggests that GMO foods could even slow climate change by reducing greenhouse gases.

If you do your research on GMOs — if you “think slow” about them — you’ll realize that there is a lot to think about, both good and bad. For all the positives I mentioned before, there are at least an equal number of troubling things about GMOs. There is no easy answer to the question, “Are GMOs good or bad?”

But by bringing GMOs into the consumer world, marketers have shut down that debate. They are telling you, “GMOs are bad. And even though you consume GMOs by the shovelful without even realizing it, we’re going to slap some GMO-free labels on things so you will buy them and feel good about saving yourself and the planet.”

AI appears to be headed down the same path. And if GMOs are complex, AI is exponentially more so. Yes, there are things about AI we should be concerned about. But there are also things we should be excited about. AI will be instrumental in tackling the many issues we currently face.

I can’t help worrying when complex issues like AI and GMOs are painted with the same broad brush, especially when that brush is in the hands of a marketer.

Feature image: Body Scan 002 by Ignotus the Mage, used under CC BY-NC-SA 2.0 / Unmodified

We SHOULD Know Better — But We Don’t

“The human mind is both brilliant and pathetic. Humans have built hugely complex societies and technologies, but most of us don’t even know how a toilet works.”

– from “The Knowledge Illusion: Why We Never Think Alone” by Steven Sloman and Philip Fernbach.

Most of us think we know more than we do — especially about things we really know nothing about. This phenomenon is called the Dunning-Kruger Effect. Named after psychologists Justin Kruger and David Dunning, this bias causes us to overestimate our ability to do things that we’re not very good at.

That’s the basis of the new book “The Knowledge Illusion: Why We Never Think Alone.” The basic premise is this: We all think we know more than we actually do. Individually, we are all “error prone, sometimes irrational and often ignorant.” But put a bunch of us together and we can do great things. We were built to operate in groups. We are, by nature, herding animals.

This basic human nature was in the back of my mind when I was listening to an interview with Es Devlin on CBC Radio. Devlin describes herself as an artist and stage designer. She was the vision behind Beyoncé’s Renaissance Tour, U2’s current run at The Sphere in Las Vegas, and the 2022 Super Bowl halftime show with Dr. Dre, Snoop Dogg, Eminem and Mary J. Blige.

When it comes to designing a visually spectacular experience, Devlin has every right to be a little cocky. But even she admits that not every good idea comes directly from her. She said the following in the interview (it’s profound, so I’m quoting it at length):

“I learned quite quickly in my practice to not block other people’s ideas — to learn that, actually,  other people’s ideas are more interesting than my own, and that I will expand by absorbing someone else’s idea.

“The real test is when someone proposes something in a collaboration that you absolutely, [in] every atom of your body, revile against. They say, ‘Why don’t we do it in bubblegum pink?’ and it was the opposite of what you had in mind. It was the absolute opposite of anything you would dream of doing.

“But instead of saying, ‘Oh, we’re not doing that,’  you say ‘OK,’ and you try to imagine it. And then normally what will happen is that you can go through the veil of the pink bubblegum suggestion, and you will come out with a new thing that you would never have thought of on your own.

“Why? Because your own little batch of poems, your own little backpack of experience, does not converge with that other person, so you are properly meeting not just another human being, but everything that led up to them being in that room with you.”

From an interview with Tom Power on Q, CBC Radio, March 18, 2024

We live in a culture that puts the individual on a pedestal. When it comes to individualistic societies, none is more so than the United States (according to a study by Hofstede Insights). Protection of personal rights and freedoms is the cornerstone of our society (I am Canadian, but we’re not far behind on this world ranking of individualistic societies). The same is true in the U.K. (where Devlin is from), Australia, the Netherlands and New Zealand.

There are good things that come with this, but unfortunately it also sets us up as the perfect targets for the Dunning-Kruger effect. This individualism and the cognitive bias that comes with it are reinforced by social media. We all feel we have the right to be heard — and now we have the platforms that enable it.

With each post, our unshakable belief in our own genius and infallibility is bulwarked by a chorus of likes from a sycophantic choir who are jamming their fingers down on the like button. Where we should be cynical of our own intelligence and knowledge, especially about things we know nothing about, we are instead lulled into hiding behind dangerous ignorance.

What Devlin has to say is important. We need to be mindful of our own limitations and be willing to ride on the shoulders of others so we can see, know and do more. We need to peek into the backpack of others to see what they might have gathered on their own journey.

(Feature Image – Creative Commons – https://www.flickr.com/photos/tedconference/46725246075/)

The Messaging of Climate Change

Eighty-six percent of the world believes that climate change is a real thing. That’s the finding of a massive new megastudy with hundreds of authors (the paper’s author acknowledgement runs a page and a half). Some 60,000 participants from 63 countries around the world took part. And, as I said, 86% of them believe in climate change.

Frankly, there’s no surprise there. You just have to look out your window to see it. Here in my corner of the world, wildfires wiped out hundreds of homes last summer, and just a few weeks ago, a weird winter whiplash took temperatures from unseasonably warm to deep-freeze cold literally overnight. That anomaly wiped out this region’s wine industry. The only thing I find surprising about the 86% stat is that 14% still don’t believe. That speaks of a determined type of ignorance.

What is interesting about this study is that it was conducted by behavioral scientists. This is an area that has always fascinated me. From the time I read Richard Thaler and Cass Sunstein’s book, Nudge, I have always been interested in behavioral interventions. What are the most effective “nudges” in getting people to shift their behaviors to more socially acceptable directions?

According to this study, that may not be that easy. When I first dove into this study, my intention was to look at how different messages had different impacts depending on the audience: right wing vs left wing for instance. But in going through the results, what struck me the most was just how poorly all the suggested interventions performed. It didn’t matter if you were liberal or conservative or lived in Italy or Iceland. More often than not, all the messaging fell on deaf ears.

What the study did find is that how you craft your campaign about climate change depends on what you want people to do. Do you want to shift non-believers in climate change toward being believers? Then decrease the psychological distance. More simply put, bring the dangers of climate change to their front doorstep. If you live next to a lot of trees, talk about wildfires. If you live on the coast, talk about flooding. If you live in a rural area, talk about the impacts of drought. But it should be noted that we’re not talking about a massive shift here, with an “absolute effect size of 2.3%.” It was the winner by sheer virtue of sucking the least.

If you want to build support for legislation that mitigates climate change, the best intervention was to encourage people to write a letter to a child who’s close to them, with the intention that the child read it in the future. This forces the writer to put some psychological skin in the game.

Who could write a future letter to someone you care about without making some kind of pledge to make sure there’s still a world they can live in? And once you do that, you feel obligated to follow through. Once again, this had a minimal impact on behaviors, with an overall effect size of 2.6%.

A year and a half ago, I talked about climate change messaging, debating MediaPost Editor in Chief Joe Mandese about whether a doom-and-gloom approach would move the needle on behaviors. In a commentary from the summer of 2022, Mandese wrapped up by saying, “What the ad industry really needs to do is organize a massive global campaign to change the way people think, feel and behave about the climate — moving from a not-so-alarmist ‘change’ to an ‘our house is on fire’ crisis.”

In a follow-up, I worried that doom and gloom might backfire on us: “Cranking up the crisis intensity on our messaging might have the opposite effect. It may paralyze us.”

So, what does this study say?

The answer, again, is, “it depends.” If we’re talking about getting people to share posts on social media, then doom and gloom is the way to go. Of all the various messaging options, this had the biggest impact on sharing, by a notable margin.

This isn’t really surprising. A number of studies have shown that negative news is more likely to be shared on social media than positive news.

But what if we’re asking people to make a change that requires some effort beyond clicking the “share” button? What if they actually have to do something? Then, as I suspected, doom-and-gloom messaging had the opposite effect, decreasing the likelihood that people would make a behavioral change to address climate change (the study used a tree-planting initiative as an example). In fact, when asking participants to actually change their behavior in an effortful way, all the tested climate interventions either had no effect or, worse, they “depress(ed) and demoralize(d) the public into inaction”.

That’s not good news. It seems that no matter what the message is, or who the messenger is, we’re likely to shoot them if they’re asking us to do anything beyond bury our head in the sand.

What’s even worse, we may be losing ground. A study from 10 years ago by Yale University had more encouraging results. It showed that effective climate change messaging was able to shift public perceptions by up to 19%. While not nearly as detailed as this study, the results seem to indicate a backslide in the effectiveness of climate messaging.

One of the commentators who covered the new worldwide study perhaps summed it up best by saying, “if we’re dealing with what is probably the biggest crisis ever in the history of humanity, it would help if we actually could talk about it.”

Privacy’s Last Gasp

We’ve been sliding down the slippery slope of privacy rights for some time. But like everything else in the world, the rapid onslaught of disruption caused by AI is unfurling a massive red flag when it comes to any illusions we may have about our privacy.

We have been giving away a massive amount of our personal data for years now without really considering the consequences. If we do think about privacy, we do so as we hear about massive data breaches. Our concern typically is about our data falling into the hands of hackers and being used for criminal purposes.

But when you combine AI and data, a bigger concern should catch our attention. Even if we were once able to retain some degree of anonymity, that’s no longer the case. Everything we do is now traceable back to us.

Major tech platforms generally deal with any privacy concerns with the same assurance: “Don’t worry, your data is anonymized!” But really, even anonymized data requires very few dots to be connected to relink the data back to your identity.

Here is an example from the Electronic Frontier Foundation. Let’s say there is data that includes your name, your ZIP or postal code, your gender and your birthdate. If you remove your name, but include those other identifiers, technically that data is now anonymized.

But, says the EFF:

  • First, think about the number of people that share your specific ZIP or postal code. 
  • Next, think about how many of those people also share your birthday. 
  • Now, think about how many people share your exact birthday, ZIP code, and gender. 

According to a study from Carnegie Mellon University, those three factors are all that’s needed to identify 87% of the US population. If we fold in AI and its ability to quickly crunch massively large data sets to identify patterns, that percentage effectively becomes 100% and the data horizon expands to include pretty much everything we say, post, do or think. We may not think so, but we are constantly in the digital data spotlight and it’s a good bet that somebody, somewhere is watching our supposedly anonymous activities.
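The EFF’s three bullet points boil down to a simple counting exercise. Here’s a minimal sketch in Python (the records are invented for illustration; neither the EFF nor the Carnegie Mellon study published this code) that groups an “anonymized” dataset by the ZIP/birthdate/gender triple and flags every combination that only one person has:

```python
from collections import Counter

# Hypothetical "anonymized" records: names removed, quasi-identifiers kept.
records = [
    {"zip": "12345", "birthdate": "1980-06-01", "gender": "F"},
    {"zip": "12345", "birthdate": "1980-06-01", "gender": "F"},  # two people share all three
    {"zip": "12345", "birthdate": "1975-02-14", "gender": "F"},
    {"zip": "67890", "birthdate": "1980-06-01", "gender": "M"},
]

# Count how many records share each ZIP/birthdate/gender triple.
counts = Counter((r["zip"], r["birthdate"], r["gender"]) for r in records)

# A triple shared by only one record points to exactly one person.
unique = [key for key, n in counts.items() if n == 1]
print(f"{len(unique)} of {len(records)} records are uniquely identifiable")
# prints: 2 of 4 records are uniquely identifiable
```

Any record whose triple is unique can be re-identified by joining it against any named dataset that contains the same three fields, such as a voter roll or a marketing list. No AI is needed for that; AI just makes the joining fast and the candidate datasets effectively unlimited.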

The other shred of comfort we tend to cling to when we trade away our privacy is that at least the data is held by companies we are familiar with, such as Google and Facebook. But according to a recent survey by Merkle, reported on in MediaPost by Ray Schultz, even that small comfort may be slipping from our grasp. Fifty-eight percent of respondents said they were concerned about whether their data and privacy were being protected.

Let’s face it. If a platform is supported by advertising, then that platform will continue to develop tools to more effectively identify and target prospects. You can’t do that and also effectively protect privacy. The two things are diametrically opposed. The platforms are creating an ecosystem where it will become easier and easier to exploit individuals who thought they were protected by anonymity. And AI will exponentially accelerate the potential for that exploitation.

The platforms’ failure to protect individuals is currently being investigated by the US Senate Judiciary Committee. The individuals in this case are children, and the protection that has failed is against sexual exploitation. None of the platform executives giving testimony intended for this to happen. Mark Zuckerberg apologized to the parents at the hearing, saying, “I’m sorry for everything you’ve all gone through. It’s terrible. No one should have to go through the things that your families have suffered.”

But this exploitation didn’t happen just because of one little crack in the system or because someone slipped up. It happened because Meta has intentionally and systematically built a platform that collects the data and assembles the audience that make this exploitation possible. It’s like a gun manufacturer standing up and saying, “I’m sorry. We never imagined our guns would be used to actually shoot people.”

The most important question is: do we care that our privacy has effectively been destroyed? Sure, when we’re asked in a survey if we’re worried, most of us say yes. But our actions say otherwise. Would we trade away the convenience and utility these platforms offer us in order to get our privacy back? Probably not. And all the platforms know that.

As I said at the beginning, our privacy has been sliding down a slippery slope for a long time now. And with AI now in the picture, it’s probably going down for the last time. There is really no more slope left to slide down.

Fooling Some of the Systems Some of the Time

If there’s a system, there’s a way to game it. Especially when those systems are tied to someone making money.

Buying a Best Seller

Take publishing, for instance. New books that say they are on the New York Times Best-Seller List sell more copies than ones that don’t make the list. A 2004 study by University of Wisconsin economics professor Alan Sorensen found the bump is about 57%. That’s certainly motivation for a publisher to game the system.

There’s also another motivating factor. According to a Times op-ed, Michael Korda, former editor in chief of Simon and Schuster, said that an author’s contract can include a bonus of up to $100,000 for hitting No. 1 on the list.

This amplifying effect is not a one-shot deal. Make the list for just one week, in any slot under any category, and you can forever call yourself a “NY Times bestselling author,” reaping the additional sales that that honor brings with it. Given the potential rewards, you can guarantee that someone is going to be gaming the system.

And how do you do that? Typically, by doing a bulk purchase through an outlet that feeds its sales numbers to The Times. That’s what Donald Trump Jr. and his publisher did for his book “Triggered,” which hit No. 1 on its release in November of 2019, according to various reports. Just before the release, the Republican National Committee reportedly placed a $94,800 order with a bookseller, which would equate to about 4,000 books, enough to ensure that “Triggered” would end up on the Times list. (Note: The Times does flag these suspicious entries with a dagger symbol when it believes that someone may be gaming the system by buying in bulk.)

But it’s not only book sales where you’ll find a system primed for rigging. Even those supposedly objective 5-star buyer ratings you find everywhere have also been gamed.

5-Star Scams

A 2021 McKinsey report said that, depending on the category, a small bump in a star rating on Amazon can translate into a 30% to 200% boost in sales. Given that potential windfall, it’s no surprise that you’ll find fake review scams proliferate on the gargantuan retail platform.

A recent Wired exposé on these fake reviews found a network that had achieved a level of sophistication that was sobering. It included active recruitment of human reviewers (called “Jennies” — if you haven’t been recruited yet, you’re a “Virgin Jenny”) willing to write a fake review for a small payment or free products. These recruitment networks include recruiting agents in locations including Pakistan, Bangladesh and India working for sellers from China.

But the fake review ecosystem also included reviews cranked out by AI-powered automated agents. As AI improves, these types of reviews will be harder to spot and weed out of the system.

Some recent studies have found that, depending on the category, over one-third of the reviews you see on Amazon are fake. Books, baby products and large appliance categories are the worst offenders.

Berating Ratings…

Back in 2014, Itamar Simonson and Emanuel Rosen wrote a book called “Absolute Value: What Really Influences Customers in the Age of (Nearly) Perfect Information.” Spoiler alert: they posited that consumer reviews and other sources of objective information were replacing traditional marketing and branding in terms of what influenced buyers.

They were right. The stats I cited above show how powerful these supposedly objective factors can be in driving sales. But unfortunately, thanks to the inevitable attempts to game these systems, the information they provide can often be far from perfect.

A Look Back at 2023 from the Inside

(Note: This refers to the regular feature on MediaPost — The Media Insider — which I write for every Tuesday.)

It seems that every two years, I look back at what the Media Insiders were musing about over the past year. The ironic part is that I’m not an Insider. I haven’t been “inside” the media industry for over a decade. Maybe that affords me just enough distance to be what I hope could be called an “informed observer.”

I first did this in 2019, and then again in 2021. This year, I decided to grab the back of an envelope (literally) and redo this far-from-scientific poll. Categorizing themes is always a challenge when I do this, but some themes have definitely been consistent across the past five years. I have tremendous respect for my fellow Insiders, and I always find it enlightening to learn what was on their minds.

In 2019, the top three things we were thinking about were (in order): disruption in the advertising business, how technology is changing us, and how politics changed social media.

In 2021, the top three topics included (again) how technology was changing us, general marketing advice and the toxic impact of social media.

So, what about 2023? What were we writing about? After eliminating the columns that were reruns, I ended up with 230 posts in the past year.

It probably comes as a surprise to no one that artificial intelligence was the number one topic by a substantial margin. Almost 15% of all our Insider posts talked about the rise of AI and its impact on – well – pretty much everything!

The number two topic – at 12% – was TV, video and movies. Most of the posts touched on how this industry is going through ongoing disruption in every aspect – creation, distribution, buying and measurement.

Coming in at number three, at just under 12%, was social media. As in previous years, most of the posts were about the toxic nature of social media, but there was a smattering of positive case studies about how social platforms were used for positive change.

We Insiders have always been an existential bunch, and last year was no different. Our number four topic was our struggle to stay human in a world increasingly dominated by tech. This accounted for almost 11% of all our posts.

The next two most popular topics were both firmly grounded in the marketing industry itself. Posts about how to be a better marketer generated almost 9% of Insider content for 2023 and various articles about the business of tech marketing added another 8% of posts.

Continuing down the list, we have world events and politics (Dave Morgan’s columns about Ukraine were a notable addition to this topic), examples of marketing gone wrong, and the art and science of brand building.

We also looked at the phenomenon of fame and celebrity, sustainability, and the state of the news industry. In what might have been a wistful look back at what we remember as simpler times, there were even a few columns about retro media, including the resurgence of the LP.

Interestingly, former hot topics like performance measurement, data and search all clustered near the bottom of the list in terms of number of posts covering these topics.

With 2023 in our rearview mirror, what are the takeaways? What can we glean from the collected year-long works of these very savvy and somewhat battle-weary veterans of marketing?

Well, the word “straddle” comes to mind. We all seem to have one foot still planted in the world and industry we thought we knew, and the other tentatively dipping its toes into the murky waters of what might come. You can tell that the Media Insiders are no less passionate about the various forms of media we write about, but we do go forward with the caution that comes from having been there and done that.

I think that, in total, I found a potentially worrying duality in this review of our writing. Give or take a few years, all my fellow Insiders are of the same generation. But we are not your typical Gen-Xers/Baby Boomers (or, in my case, caught in the middle as a member of Generation Jones). We have worked with technology all our lives. We get it. The difference is, we have also accumulated several decades of life wisdom. We are past the point where we’re mesmerized by bright shiny objects. I think this gives us a unique perspective. And, based on what I read, we’re more than a little worried about what the future might bring.

Take that for what it’s worth.

OpenAI’s Q* – Why Should We Care?

OpenAI founder Sam Altman’s ouster and reinstatement has rolled through the typical news cycle, and we’re now back to blissful ignorance. But I think this will be one of the sea-change moments: a tipping point we’ll look back on in the future, when AI has changed everything we thought we knew, and wonder, “How the hell did we let that happen?”

Sometimes I think that tech companies use acronyms and cryptic names for new technologies to allow them to sneak game changers in without setting off the alarm bells. Take OpenAI for example. How scary does Q-Star sound? It’s just one more vague label for something we really don’t understand.

If I’m right, we do have to ask the question, “Who is keeping an eye on these things?”

This week I decided to dig into the whole Sam Altman firing/hiring episode a little more closely so I could understand if there’s anything I should be paying attention to. Granted, I know almost nothing about AI, so what follows is very much at the layperson level, but I think that’s probably true for the vast majority of us. I don’t run into AI engineers that often in my life.

So, should we care about what happened a few weeks ago at OpenAI? In a word – YES.

First of all, a little bit about the dynamics of what led to Altman’s original dismissal. OpenAI started with the best of altruistic intentions: “to ensure that artificial general intelligence benefits all of humanity.” That was an ideal — many would say a naïve ideal — that Altman and OpenAI’s founders imposed on themselves. As Google discovered with its “Don’t Be Evil” mantra, it’s really hard to be successful and idealistic at the same time. In our world, success is determined by profits, and idealism and profitability almost never play in the same sandbox. Google quietly watered down the “Don’t Be Evil” motto until it virtually disappeared in 2018.

OpenAI’s non-profit board was set up as a kind of internal “kill switch” to prevent the development of technologies that could be dangerous to the human race. That theoretical structure was put to the test when the board received a letter this year from some senior researchers at the company warning of a new artificial intelligence discovery that might take AI past the threshold where it could be harmful to humans. The board then did what it was set up to do, firing Altman and board chairman Greg Brockman and putting the brakes on the potentially dangerous technology. Then Big Brother Microsoft (which has invested $13 billion in OpenAI) stepped in, and suddenly Altman was back. (Note: for a far more thorough and fascinating look at OpenAI’s unique structure and its endemic problems, read through Alberto Romero’s series of thoughtful posts.)

There were probably two things behind Altman’s ouster: the potential capabilities of a new development called Q-Star, and a fear that OpenAI would follow its previous path of throwing the technology out to the world without considering the potential consequences. So why is Q-Star so troubling?

Q-Star could be a major step closer to AI that can reason and plan. This moves us closer to the overall goal of artificial general intelligence (AGI), the holy grail for every AI developer, including OpenAI. Artificial general intelligence, as per OpenAI’s own definition, refers to “AI systems that are generally smarter than humans.” Q-Star, through its ability to tackle grade school math problems, showed the promise of being artificial intelligence that could plan and reason. And that is an important tipping point, because something that can reason and plan pushes us forever past the boundary of a tool under human control. It’s technology that thinks for itself.

Why should this worry us? It should worry us because of Herbert Simon’s concept of “bounded rationality,” which holds that we humans are incapable of pure rationality. At some point we stop thinking endlessly about a question and come up with an answer that’s “good enough.” We do this because of limited processing power: emotions take over and make the decision for us.

But AGI throws those limits away. It can process exponentially more data at a rate we can’t possibly match. If we’re looking at AI through Sam Altman’s rose-colored glasses, that should be a benefit. Wouldn’t it be better to have decisions made rationally, rather than emotionally? Shouldn’t that be a benefit to mankind?

But here’s the rub. Compassion is an emotion. Empathy is an emotion. Love is also an emotion. What kind of decisions do we come to if we strip that out of the algorithm, along with any type of human check and balance?

Here’s an example. Let’s say that at some point in the future an AGI superbrain is asked the question, “Is the presence of humans beneficial to the general well-being of the earth?”

I think you know what the rational answer to that is.

When AI Love Goes Bad

When we think about AI and its implications, it’s hard to wrap our own non-digital, flesh-and-blood brains around the magnitude of it. Try as we might, it’s impossible to forecast the impact of this massive wave of disruption that’s bearing down on us. So today, in order to see what the unintended consequences might be, I’d like to zoom in on one particular example.

There is a new app out there. It’s called Anima and it’s an AI girlfriend. It’s not the only one. When it comes to potential virtual partners, there are plenty of fish in the sea. But – for this post, let’s stay true to Anima. Here’s the marketing blurb on her website: “The most advanced romance chatbot you’ve ever talked to. Fun and flirty dating simulator with no strings attached. Engage in a friendly chat, roleplay, grow your love & relationship skills.”

Now, if there’s one area where our instincts should kick in and alarm bells should start going off about AI, it should be in the area of sexual attraction. If there were one human activity that seems bound by necessity to being ITRW (in the real world), it should be this one.

If we start to imagine what might happen when we turn to AI for love, we could ask filmmaker Spike Jonze. He already imagined it, 10 years ago when he wrote the screenplay for “her”, the movie with Joaquin Phoenix. Phoenix plays Theodore Twombly, a soon-to-be divorced man who upgrades his computer to a new OS, only to fall in love with the virtual assistant (voiced by Scarlett Johansson) that comes as part of the upgrade.

Predictably, complications ensue.

To get back to Anima, I’m always amused by the marketing language developers use to lull us into the acceptance of things we should be panicking about. In this case, it was two lines: “No strings attached” and “grow your love and relationship skills.”

First, about that “no strings attached” thing – I have been married for 34 years now and I’m here to tell you that relationships are all about “strings.” Those “strings” can also be called by other names: empathy, consideration, respect, compassion and – yes – love. Is it easy to keep those strings attached – to stay connected with the person at the other end of those strings? Hell, no! It is a constant, daunting, challenging work in progress. But the alternative is cutting those strings and being alone. Really alone.

If we get the illusion of a real relationship through some flirty version of ChatGPT, will it be easier to cut the strings that keep us connected to other real people out there? Will we be fooled into thinking something is real when it’s just a seductive algorithm? In “her,” Jonze brings Twombly back to the real world, ending with a promise of a relationship with a real person as they both gaze at the sunset. But I worry that that’s just a Hollywood ending. I think many people — maybe most people — would rather stick with the “no strings attached” illusion. It’s just easier.

And will AI adultery really “grow your love and relationship skills”? No. No more than you will grow your ability to determine accurate and reliable information by scrolling through your Facebook feed. That’s just a qualifier the developer threw in so they didn’t feel crappy about leading their customers down the path to “AI-rmageddon.”

Even if we put all this other stuff aside for the moment, consider the vulnerable position we put ourselves in when we start mistaking robotic love for the real thing. All great cons rely on one of two things – either greed or love. When we think we’re in love, we drop our guard. We trust when we probably shouldn’t.

Take the Anima artificial girlfriend app, for example. We know nothing about the makers of this app. We don’t know where the data collected goes. We certainly have no idea what their intentions are. Is this really who you want to start sharing your most intimate chit-chat with? Even if their intentions are benign, this is an app built by a for-profit company, which means there needs to be a revenue model in it somewhere. I’m guessing that all your personal data will be sold to the highest bidder.

You may think all this talk of AI love is simply stupid. We humans are too smart to be sucked in by an algorithm. But study after study has shown we’re not. We’re ready to make friends with a robot at the drop of a hat. And once we hit friendship, can love be far behind?