2024: A Media Insider Review

(This is my annual look back at what the MediaPost Media Insiders were talking about in the last year.)

Last year at this time I took a look back at what we Media Insiders had written about over the previous 12 months. Given that 2024 was such a tumultuous year, I thought it would be interesting to do it again and see if that was mirrored in our posts.

Spoiler alert: It was.

If MediaPost had such a thing as an elders’ council, the Media Insiders would be it. We have all been writing for MediaPost for a long, long time. As I mentioned, my last post was my 1,000th for MediaPost. Cory Treffiletti has actually surpassed my total, with 1,154 posts. Dave Morgan has written 700. Kaila Colbin has 586 posts to her credit. Steven Rosenbaum has penned 371, and Maarten Albarda has 367. Collectively, that is more than 4,000 posts.

I believe we bring a unique perspective to the world of media and marketing and — I hope — a little gravitas. We have collectively been around several blocks numerous times and have been doing this pretty much as long as there has been a digital marketing industry. We have seen a lot of things come and go. Given all that, it’s probably worth paying at least a little attention to what is on our collective minds. So here, in a Media Insider meta-analysis, is 2024 in review.

I tried to group our posts into four broad thematic buckets and tally the posts that fell into each. Let’s take them in reverse order.

Media

Technically, we’re supposed to write about media, which, I admit, is a very vaguely defined category. It could probably be applied to almost everything we wrote, in one way or another. But if we’re going to be sticklers about it, very few of our posts were actually about media. I counted only 12, the majority of them about TV or movies. There were a couple of posts about music as well.

If you define media as a “box,” we were definitely thinking outside of it.

It Takes a Village

This next category is more of the “Big Picture” fare we Media Insiders seem to gravitate toward. It covers how we humans define community, gather in groups and find our own places in the world. In 2024, we wrote 59 posts that I placed in this category.

Almost half of these posts looked at the role of markets in our world and how the rules of engagement for consumers in those markets are evolving. We also looked at how we seek information, communicate with each other and process the world through our own eyes.

The Business of Marketing

All of us Media Insiders either are or were marketers, so it makes sense that marketing is still top of mind for us. We wrote 80 posts about the business of marketing. The three most popular topics were — in order — buying media, the evolving role of the agency, and marketing metrics. We also wrote about advertising technology platforms, branding and revenue models. Even my old wheelhouse of search was touched on a few times last year.

Existential Threats

Our most popular category was not surprising, given that it reflects the troubled nature of the world we live in. Fully 40% of the posts we wrote — 99 in total — were about something that threatens our future as humans.
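(For the number-minded, the math checks out. Here is a quick tally in code, using my own bucket counts from this post:)

```python
# A quick sanity check (using my own bucket counts from this post)
# that the "existential threats" bucket really is 40% of the total.
buckets = {
    "Media": 12,
    "It Takes a Village": 59,
    "The Business of Marketing": 80,
    "Existential Threats": 99,
}

total = sum(buckets.values())                     # 250 posts in all
share = buckets["Existential Threats"] / total    # 99 / 250 = 0.396
print(f"{total} posts, {share:.0%} existential")  # "250 posts, 40% existential"
```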

The number-one topic, as it was last year, was artificial intelligence. There is a caveat here: not all the posts treated AI as a threat. Some looked at its potential benefits. But the vast majority were rather doomy and gloomy in their outlook.

While AI topped the list of things we wrote about in 2024, it was followed closely by two other topics that also gave us grief: the death knell of democracy, and the scourge of social media.

The angst about the decay of democracy is not surprising, given that the U.S. has just gone through a WTF election cycle. It’s also clear that we collectively feel that social media must be reined in. Not one of our 28 posts on social media had anything positive to say.

As if those three threats weren’t enough, we also touched briefly on climate change, the wars raging in Ukraine and the Middle East, and the disappearance of personal privacy.

Looking Forward

What about 2025? Will we be any more positive in the coming year? I doubt it. But it’s interesting to note that the three biggest worries we had last year were all monsters of our own making. AI, the erosion of democracy, and the toxic nature of social media are all things squarely within our purview. Even if these things were not created by media and marketing, they certainly share the same ecosystem. And, as I said in my 1,000th post, if we built these things, we can also fix them.

The Political Brinkmanship of Spam

I am never a fan of spam. But this is particularly true when there is an upcoming election. The level of spam I have been wading through seems to have doubled lately. We just had a provincial election here in British Columbia, and all parties pulled out all the stops, including, but not limited to: email, social media posts, robotexts and robocalls.

In Canada and the US, political campaigns are not subject to phone and text spam control laws such as our Canadian Do Not Call List legislation. There seems to be a little more restriction on email spam. A report from Nationalsecuritynews.com this past May warned that Americans would be subjected to over 16 billion political robocalls. That is a ton of spam.

During this past campaign here in B.C., I noticed that I do not respond to all spam with equal abhorrence. Ironically, the spam channels with the loosest restrictions are the ones that frustrate me the most.

There are places – like email – where I expect spam. It’s part of the rules of engagement. But there are other places where spam sneaks through and feels like a greater intrusion. In those channels, I tend to have a more visceral reaction to spam. I get both frustrated and angry when I have to respond to an unwanted text or phone call. But with email spam, I just filter and delete without feeling like I was duped.

Why don’t we deal with all spam – no matter the channel – the same way? Why do some forms of spam make us more irritated than others? It’s almost as if we’ve developed a spam algorithm that dictates how irritated we get when we encounter it.

According to an article in Scientific American, the answer might be in how the brain marshals its own resources.

When it comes to capacity, the brain is remarkably protective. It usually defaults to the most efficient path. It likes to glide on autopilot, relying on instinct, habit and beliefs. All these things use much less cognitive energy than deliberate thinking. That’s probably why “mindfulness” is the most often quoted but least often used meme in the world today.

The resource we’re working with here is attention. Limited by the capacity of our working memory, attention is a spotlight we must use sparingly. Our working memory is only capable of handling a few discrete pieces of information at a time. Recent research suggests the limit may be around 3 to 5 “chunks” of information, and that research was done on young adults. Like most things with our brains, the capacity probably diminishes with age. Therefore, the brain is very stingy with attention. 

I think spam that somehow gets past our first line of defence – the feeling that we’re in control of filtering – makes us angry. We have been tricked into paying attention to something unexpected. It becomes a control issue. In an information environment where we feel we have more control, we probably have less of a visceral response to spam. This would be true for email, where a quick scan of the items in our inbox is usually enough to filter out the spam. The amount of attention that gets hijacked by spam is minimal.

But when spam launches a sneak attack and demands a swing of attention that is beyond our control, that’s a different matter. We operate with a different mental modality when we answer a phone or respond to a text. Unlike email, we expect those channels to be relatively spam-free, or at least they are until an election campaign comes around. We go in with our spam defences down and then our brain is tricked into spending energy to focus on spurious messaging.

How does the brain conserve energy? It uses emotions. We get irritated when something commandeers our attention. The more unexpected the diversion, the greater the irritation.  Conversely, there is the equivalent of junk food for the brain – input that requires almost no thought but turns on the dopamine tap and becomes addictive. Social media is notorious for this.
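If I were to actually write out that “spam algorithm” I mentioned earlier, it might look something like this (a whimsical sketch, entirely my own invention, with made-up numbers):

```python
# A whimsical sketch of the "spam algorithm" (my own invention, not real
# neuroscience): irritation scales with how unexpected spam is on a given
# channel and how much attention that channel forcibly hijacks.

# How much spam we already expect on each channel (0 = none, 1 = constant).
EXPECTED_SPAM = {"email": 0.9, "text": 0.2, "phone": 0.1}

# How much attention the channel demands before we can filter a message out.
ATTENTION_COST = {"email": 0.1, "text": 0.5, "phone": 1.0}

def irritation(channel: str) -> float:
    """Unexpectedness times attention hijacked: both factors drive the anger."""
    unexpectedness = 1.0 - EXPECTED_SPAM[channel]
    return unexpectedness * ATTENTION_COST[channel]

for channel in ("email", "text", "phone"):
    print(f"{channel}: irritation = {irritation(channel):.2f}")
# email: 0.01, text: 0.40, phone: 0.90 -- which matches how I actually feel.
```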

This battle for our attention has been escalating for the past two decades. As we try to protect ourselves from spam with more powerful filters, those who spread spam find new ways to get past those filters. The reason political messaging was exempted from spam-control legislation was that democracies need a well-informed electorate, and during election campaigns political parties should be able to send out accurate information about their platforms and positions.

That was the theory, anyway.

Privacy’s Last Gasp

We’ve been sliding down the slippery slope of privacy rights for some time. But the rapid onslaught of disruption caused by AI is unfurling a massive red flag over any illusions we may still have about our privacy.

We have been giving away massive amounts of our personal data for years now without really considering the consequences. If we do think about privacy, it’s usually when we hear about massive data breaches. Our concern typically is about our data falling into the hands of hackers and being used for criminal purposes.

But when you combine AI and data, a bigger concern should catch our attention. Even if we have managed to retain some degree of anonymity until now, that is no longer the case. Everything we do is now traceable back to us.

Major tech platforms generally deal with any privacy concerns with the same assurance: “Don’t worry, your data is anonymized!” But really, even anonymized data requires very few dots to be connected to relink the data back to your identity.

Here is an example from the Electronic Frontier Foundation. Let’s say there is a dataset that includes your name, your ZIP or postal code, your gender and your birthdate. If you remove the name but keep those other identifiers, technically that data is now anonymized.

But, says the EFF:

  • First, think about the number of people that share your specific ZIP or postal code. 
  • Next, think about how many of those people also share your birthday. 
  • Now, think about how many people share your exact birthday, ZIP code, and gender. 

According to a study from Carnegie Mellon University, those three factors are all that’s needed to identify 87% of the US population. If we fold in AI and its ability to quickly crunch massively large data sets to identify patterns, that percentage effectively becomes 100% and the data horizon expands to include pretty much everything we say, post, do or think. We may not think so, but we are constantly in the digital data spotlight and it’s a good bet that somebody, somewhere is watching our supposedly anonymous activities.
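To see why so few dots are needed, the back-of-envelope math is simple (my own rough arithmetic, not the EFF’s or CMU’s):

```python
# Back-of-envelope arithmetic (mine, not the EFF's or CMU's) showing why
# ZIP + birthdate + gender is so identifying: there are far more possible
# combinations than there are Americans, so most combinations are unique.

US_POPULATION = 330_000_000   # rough current figure
ZIP_CODES = 41_000            # approximate number of US ZIP codes
BIRTHDATES = 365 * 80         # assuming ~80 plausible birth years
GENDERS = 2

combinations = ZIP_CODES * BIRTHDATES * GENDERS
people_per_combo = US_POPULATION / combinations

print(f"{combinations:,} possible combinations")         # ~2.4 billion
print(f"{people_per_combo:.2f} people per combination")  # ~0.14
# At ~0.14 people per combination, the average person is the only one who
# matches their particular triple -- which is how the study got to 87%.
```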

The other shred of comfort we tend to cling to when we trade away our privacy is that at least the data is held by companies we are familiar with, such as Google and Facebook. But according to a recent survey by Merkle, reported on in MediaPost by Ray Schultz, even that small comfort may be slipping from our grasp. Fifty-eight percent of respondents said they were concerned about whether their data and identity were being protected.

Let’s face it. If a platform is supported by advertising, then that platform will continue to develop tools to more effectively identify and target prospects. You can’t do that and also effectively protect privacy. The two things are diametrically opposed. The platforms are creating an ecosystem where it will become easier and easier to exploit individuals who thought they were protected by anonymity. And AI will exponentially accelerate the potential for that exploitation.

The platforms’ failure to protect individuals is currently being investigated by the US Senate Judiciary Committee. The individuals in this case are children, and the protection that has failed is against sexual exploitation. None of the platform executives giving testimony intended for this to happen. Mark Zuckerberg apologized to the parents at the hearing, saying, “I’m sorry for everything you’ve all gone through. It’s terrible. No one should have to go through the things that your families have suffered.”

But this exploitation didn’t happen just because of one little crack in the system or because someone slipped up. It happened because Meta has intentionally and systematically built a platform on which the data is collected and the audience is assembled in ways that make this exploitation possible. It’s like a gun manufacturer standing up and saying, “I’m sorry. We never imagined our guns would be used to actually shoot people.”

The most important question is: do we care that our privacy has effectively been destroyed? Sure, when we’re asked in a survey if we’re worried, most of us say yes. But our actions say otherwise. Would we trade away the convenience and utility these platforms offer us in order to get our privacy back? Probably not. And all the platforms know that.

As I said at the beginning, our privacy has been sliding down a slippery slope for a long time now. And with AI now in the picture, it’s probably going down for the last time. There is really no more slope left to slide down.

The Challenge in Regulating AI

A few weeks ago, MediaPost’s Wendy Davis wrote a commentary on the Federal Trade Commission’s investigation of OpenAI. Of primary concern to the FTC was ChatGPT’s tendency to hallucinate. I found this out for myself when ChatGPT told some whoppers about who I was and what I’ve done in the past.

Davis wrote, “The inquiry comes as a growing chorus of voices — including lawmakers, consumer advocates and at least one business group — are pushing for regulations governing artificial intelligence. OpenAI has also been hit with lawsuits over copyright infringement, privacy and defamation.”

This highlights a problem with trying to legislate AI. The U.S. is taking its existing laws and trying to apply them to a disruptive and unpredictable technology. Laws, by their nature, have to be specific, which means you have to be able to anticipate the circumstances in which they’d be applied. But how do you create or apply laws for something unpredictable? All you can do is regulate what you know. And when it comes to predicting the future, legislators tend to be a pretty unimaginative bunch.

In the intro to a Legal Rebels podcast on the American Bar Association’s website, Victor Li included this quote: “At present, the regulation of AI in the United States is still in its early stages, and there is no comprehensive federal legislation dedicated solely to AI regulation. However, there are existing laws and regulations that touch upon certain aspects of AI, such as privacy, security and anti-discrimination.”

The ironic thing was, the quote came from ChatGPT. But in this case, ChatGPT got it mostly right. The FTC is trying to use the laws at its disposal to corral OpenAI by playing a game of legal whack-a-mole: hammering things like privacy, intellectual property rights, defamation, deception and discrimination as they pop their heads up.

But that’s only addressing the problems the FTC can see. It’s like repainting the deck railings on the Titanic the day before it hit the iceberg. It’s not what you know that’s going to get you, it’s what you don’t know.

If you’re attacking ChatGPT’s tendency to fabricate reality, you’re probably tilting at the wrong windmill. This is a transitory bug. OpenAI benefits in no way from ChatGPT’s tendency to hallucinate. The company would much rather have a large language model that is reliably truthful and accurate. You can bet they’re working on it. By the time the ponderous wheels of the U.S. legislative system get turned around and rolling in the right direction, chances are the bug will be fixed and there won’t really be anything to legislate against.

What we need before we start talking about legislation is something more fundamental. We need an established principle, a framework of understanding from which laws can be created as situations arise.

This is not the first time we’ve faced a technology that came packed with potential unintended consequences. In February 1975, some 140 people gathered at a conference center on California’s Monterey Peninsula to attempt to put a leash on genetic manipulation, particularly recombinant DNA engineering.

This group, made up mainly of biologists with a smattering of lawyers and physicians, established principle-based guidelines that took their name from the conference center where the group met: the Asilomar guidelines.

The guidelines were based on the level of risk involved in proposed experiments. The higher the risk, the greater the required precautions.

These guidelines were flexible enough to adapt as the science of genetic engineering evolved. It was one of the first applications of something called “the precautionary principle” – which is just what it sounds like: if the future is uncertain, go forward slowly and cautiously.

While the U.S. is late to the AI legislation party, the European Union has been taking the lead. And if you look at the EU’s first attempt at AI regulation, drafted in 2021, you’ll see the precautionary principle written all over it. Like the Asilomar guidelines, it sets different rules for different risk levels. While the U.S. attempts at legislation are mired in spotty specifics, the EU is establishing a universal framework that can adapt to the unexpected.
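As a rough illustration (my own toy summary in code, not the draft regulation’s actual text), that tiered logic works something like this:

```python
# A toy summary (my own framing, not the regulation's text) of the EU draft's
# risk-tiered logic: obligations scale with the assessed risk level, echoing
# Asilomar's principle of matching precautions to risk.

RISK_TIERS = {
    "unacceptable": "banned outright (e.g., government social scoring)",
    "high": "strict requirements: risk management, human oversight, audits",
    "limited": "transparency duties (e.g., disclose that users face an AI)",
    "minimal": "no new obligations",
}

def obligations_for(risk_level: str) -> str:
    """Look up what a system at a given risk tier must comply with."""
    return RISK_TIERS[risk_level]

# A new, unforeseen AI application just needs a risk assessment to slot in:
print(obligations_for("high"))
```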

This is particularly important with AI, because it’s an entirely different ballgame than genetic engineering. Those driving the charge are for-profit companies, not scientists working in a lab.

OpenAI’s technology is intended as a platform that others will build on. It will move quickly, and new issues will pop up constantly. Unless the regulating bodies are incredibly nimble and quick to plug loopholes, they will constantly be playing catch-up.

It’s All in How You Spin It

I generally get about 100 PR pitches a week. And I’m just a guy who writes a post on tech, people and marketing now and then. I’m not a journalist. I’m not even gainfully employed by anyone. I am just one step removed — thanks to the platform MediaPost has provided me — from “some guy” you might meet at your local coffee shop.

But still, I get 100 PR pitches a week. Desperation for coverage is the only reason I can think of for this to be so. 99.9999% of the time, they go straight to my trash basket. And the reason they do is that they’re almost never interesting. They are — well, they’re pitches for free exposure.

Now, the average pitch, even if it isn’t interesting, should at least try to match the target’s editorial interest. It should be in the strike zone, so to speak.

Let’s do a little postmortem on one I received recently. It was titled “AI in Banking.” Fair enough. I have written a few posts on AI. Specifically, I have written a few posts on my fear of AI.

I have also written about my concerns about misuse of data. When it comes to the nexus between AI and data, I would be considered more than a little pessimistic. So, something linking AI and banking did pique my interest, but not in a good way. I opened the email.

There, in the first paragraph, I read this: “AI is changing how banks provide personalized recommendations and insights based on enriched financial data offering tailored suggestions, such as optimizing spending, suggesting suitable investment opportunities, or identifying potential financial risks.”

This, for those of you not familiar with “PR-ese,” is what we in the biz call “spin.” Kellyanne Conway once called it — more euphemistically — an alternative fact.

Let me give you an example. Let’s say that during the Tour de France, half the peloton crashes and the riders get a nasty case of road rash. A PR person would spin that to say that “hundreds of professional cyclists discover a new miracle instant exfoliation technique from the South of France.”

See? It’s not a lie, it’s just an alternative fact.

Let’s go on. The second paragraph of the pitch continued: “Bud, a company that specializes in data intelligence is working with major partners across the country (Goldman Sachs, HSBC, 1835i, etc.) to categorize and organize financial information and data so that users are empowered to make informed decisions and gain a deeper understanding of their financial situation.”

Ah — we’re now getting closer to the actual fact. The focus is beginning to switch from the user, empowered to make better financial decisions thanks to AI, to what is actually happening: a data marketplace being built on the backs of users for sale to corporate America.

Let’s now follow the link to Bud’s website. There, in big letters on the home page, you read:

“Turn transactional data into real-time underwriting intelligence

Bud’s AI platform and data visualizations help lenders evaluate risk, reduce losses and unlock hidden revenue potential.”

Bingo. This is not about users, at least, not beyond using them as grist in a data mill. This is about slipping a Trojan Horse into your smartphone in the form of an app and hoovering your personal data up to give big banks an intimate glimpse into not just your finances, but also your thinking about those finances. As you bare your monetary soul to this helpful “Bud,” you have established a direct pipeline to the very institutions that hold your future in their greedy little fingers. You’re giving an algorithm everything it needs to automatically deny you credit.

This was just one pitch that happened to catch my eye long enough to dig a little deeper. But it serves as a perfect illustration of why I don’t trust big data or AI in the hands of for-profit corporations.

And that will continue to be true — no matter how you PR pros spin it.

I Wish I Were Half the Person ChatGPT Thinks I Am

Yes, I’ve finally crawled aboard the journalist bandwagon about ChatGPT. The reason I’ve waited so long is that I don’t really have any valid technical or professional perspective to add to the conversation. All I have is my own personal opinion.

So here it is, for what it’s worth.

For my first test, I wanted to ask ChatGPT about something where there would be a fair amount of information online but also something where I had the advantage of knowing the inside scoop, allowing me to judge the accuracy of its response. So I asked about the Golden Triangle – the outcome of eye-tracking studies we did almost 20 years ago (I just had a minor internal meltdown when I did the mental math and realized that was two decades ago).

I have to admit, ChatGPT nailed it. It did an accurate and concise summary. I’m not sure I could have done better myself.

Google’s Golden Triangle, also known as the “Golden Triangle of Search,” refers to the top section of Google’s search engine results page (SERP) where the most relevant and highest-ranked search results are displayed. It is called the “golden triangle” because eye-tracking studies have shown that users tend to focus their attention on the top three results in the search results page, forming a triangular shape.

The concept of the Golden Triangle is important for search engine optimization (SEO) because it emphasizes the importance of ranking high in search results. Websites that appear in the Golden Triangle are more likely to receive clicks and traffic than those that appear lower on the page.

However, it’s worth noting that the layout of Google’s search engine results page has changed over the years, and the Golden Triangle may not be as relevant as it once was. Google now includes various types of results, such as featured snippets and knowledge panels, that may appear above or alongside the traditional organic search results. As a result, SEO strategies have evolved to include optimizing for these new types of results as well.

When I asked who discovered The Golden Triangle, ChatGPT accurately said it was a joint effort.

The concept of the Golden Triangle in relation to Google’s search engine results page (SERP) was not discovered by a single person, but rather through a combination of eye-tracking studies and data analysis conducted by various researchers and digital marketing professionals.

One of the earliest studies that examined how users interact with Google’s search results page was conducted by Enquiro Research in 2005. This study, titled “The Golden Triangle,” found that users tended to focus their attention on the top-left portion of the search results page, where the first few results were displayed.

Since then, other studies and analyses have confirmed the importance of the top section of the search results page, and the term “Golden Triangle” has become a widely used term in the SEO industry to describe this phenomenon.

For the record, the main players in identifying the pattern were Kevin Lee at DidIt; myself and my agency, Enquiro; EyeTools, the San Francisco eye-tracking consultancy that conducted the first study; and – in an independent study – Jakob Nielsen at the Nielsen Norman Group. It was Kevin and I who gave it the name The Golden Triangle, mere moments before we had to present the findings to a group at a search conference in San Jose.

So let’s try something a little less black and white. Let’s see how ChatGPT does when it has to freestyle a bit. I asked who I was.

The first paragraph was pretty accurate:

Gord Hotchkiss is a Canadian internet marketing and search engine optimization (SEO) expert. He is the CEO of a digital marketing agency called “Enquiro Search Solutions” which specializes in providing marketing and SEO services to businesses. Gord Hotchkiss is also a well-known speaker and author on the topics of online marketing and search engine optimization.

The second paragraph started gently pumping my ego, throwing in some accomplishments I either never did or can’t remember doing:

Gord Hotchkiss has been involved in the online marketing industry for over 20 years and has worked with a wide range of clients, including Fortune 500 companies. He has written numerous articles on the topics of SEO, online marketing, and search engine technology for publications such as Search Engine Land, ClickZ, and MarketingProfs.

But in the third paragraph, ChatGPT definitely had me confused with someone else:

In addition to his work in the online marketing industry, Gord Hotchkiss is also a co-founder of the “Think Tank of Human Values in Business”, a non-profit organization that focuses on the intersection of business and ethics. He is known for his thought leadership in the industry and for advocating for ethical and sustainable business practices.

Okay, where the hell did that come from? It sounds extraordinarily cool and it’s something I would love to do, but it’s definitely not on my CV. As far as I’m aware, there is no such thing as the “Think Tank of Human Values in Business.” I wish there were, but there’s not.

I tried again. This time, ChatGPT seemed a little less sure of itself. The first sentence was somewhat correct, but after that ChatGPT seemed to manufacture my bio out of nothing but moonbeams and fairy dust. It said I was:

the CEO of a leading digital marketing agency, named “Digital Always Media Inc.” Gord is also a co-founder and chair of the “The Agents of Change” conference, an annual conference focused on helping businesses adapt to the ever-changing digital landscape.

Gord has written numerous articles and books on digital marketing and has been featured in several major media outlets, including The New York Times, The Wall Street Journal, and CNN.

I don’t know who this particular Gord Hotchkiss is, but I’d like to meet him. My AI doppelgänger is leaving me in the dust. I’d better get my shit together.

Or maybe ChatGPT can actually see into the future, showing me the person I just haven’t become yet.

What worries me is how easily ChatGPT constructed complete fabrications of who I was that sounded perfectly plausible. If I didn’t know myself better, I would have simply accepted it as fact. And that – unfortunately – is what we tend to do. We don’t analyze new information and determine if it’s right or wrong. Our default is to believe it’s true until proven otherwise.

It’s this human tendency that flags the real danger with ChatGPT. And, for that reason, I have some more digging to do.

Maybe this other Gord Hotchkiss guy can give me a hand. He sounds wicked smart.


The Eternal Hatred of Interruptive Messages

Spamming and Phishing and Robocalls at Midnight
Pop-ups and Autoplays and LinkedIn Requests from Salespeople

These are a few of my least favorite things

We all feel the excruciating pain of unsolicited demands on our attention. In a study by online security firm Kaspersky that asked 2,000 Brits to rank the 50 most annoying things in life, deleting spam email came in at number four, behind scrubbing the bath, being trapped in voicemail hell and cleaning the oven.

Based on this study, cleanliness is actually next to spamminess.

Granted, Kaspersky is a tech security firm, so the results are probably biased toward the digital side, but for me they check out. As I ran down the list, I hated all the same things that were listed.

In the same study, robocalls came in at number 10. Personally, they top my list, especially phishing robocalls. I hate – hate – hate rushing to my phone only to hear that the IRS is going to prosecute me unless I immediately push 7 on my touch-tone keypad.

One, I’m Canadian. Two, go to Hell.

I spend more and more of my life trying to avoid marketers and scammers (the line between the two is often fuzzy) trying desperately to get my attention by any means possible. And it’s only going to get worse. A study just out showed that the ChatGPT AI chatbot could be a game changer for phishing, making scam emails harder to detect. And with Google’s Gmail filters already trapping 100 million phishing emails a day, that is not good news.

The marketers in my audience are probably outrunning Usain Bolt in their dash to distance themselves from spammers, but all interruptive demands on our attention sit on a spectrum that shares the same baseline: any demand on our attention that we didn’t ask for will annoy us. The only difference is the degree of annoyance.

Let’s look at the psychological mechanisms behind that annoyance.

There is a direct link between the parts of our brain that govern the focusing of attention and the parts that regulate our emotions. At its best, it’s called “flow” – a term coined by Mihaly Csikszentmihalyi that describes a sense of full engagement and purpose. At its worst, it’s a feeling of anger and anxiety when we’re unwillingly dragged away from the task at hand.

A 2017 neurological study by Rejer and Jankowski found that when a participant’s cognitive processing of a task was interrupted by online ads, activity in the frontal and prefrontal cortex simply shut down while other parts of the brain significantly shifted activity, indicating a loss of focus and a downward slide in emotions.

Another study, by Edwards, Li and Lee, points the finger at something called reactance theory as a possible explanation. Very simply put, when something interrupts us, we perceive a loss of freedom to act as we wish and a loss of control over our environment. Again, we respond by getting angry.

It’s important to note that this negative emotional burden applies to any interruption that derails what we intend to do. It is not specific to advertising, but a lot of advertising falls into that category. It’s the nature of the interruption and our mental engagement with the task that determine the degree of negative emotion.

Take skimming through a news website, for instance. We are there to forage for information. We are not actively engaged in any specific task. And so being interrupted by an ad while in this frame of mind is minimally irritating.

But let’s imagine that a headline catches our attention, and we click to find out more. Suddenly, we’re interrupted by a pop-up or pre-roll video ad that hijacks our attention, forcing us to pause our intention and focus on irrelevant information. Our level of annoyance begins to rise quickly.

Robocalls fall into a different category of annoyance, for a couple of reasons. First, we have a conditioned response to phone calls: we hope to be rewarded by hearing from someone we know and care about. That’s what makes it so difficult to ignore a ringing phone.

Second, phone calls are extremely interruptive. We must literally drop whatever we’re doing to pick up a phone. When we go to all this effort only to realize we’ve been duped by an unsolicited and irrelevant call, the “red mist” starts to descend.

You’ll note that – up to this point – I haven’t even dealt with the nature of the message. This has all been focused on the delivery of the message, which immediately puts us in a more negative mood. It doesn’t matter whether the message is about a service special for our vehicle, an opportunity to buy term life insurance or an attempt by a fictitious Nigerian prince to lighten the load of our bank account by several thousand dollars; whatever the message, we start in an irritated state simply due to the nature of the interruption.

Of course, the more nefarious the message, the more negative our emotional response will be. And this has a compounding effect on any form of intrusive advertising: we learn to associate the delivery mechanism with attempts to defraud us. Any politician who depends on robocalls to raise awareness the day before an election should ponder their ad-delivery mechanism.

My Many Problems with the Metaverse

I recently had dinner with a comedian who had just done his first gig in the Metaverse. It was in a new Meta-Comedy Club. He was excited and showed me a recording of the gig.

I have to admit, my inner geek thought it was very cool: disembodied hands clapping with avataresque names floating above them, bursts of virtual confetti for the biggest laughs, and even a virtual hook that instantly snagged meta-hecklers, banishing them to meta-purgatory until they promised to behave. The comedian said he wanted to record a comedy meta-album in the meta-club to release to his meta-followers.

It was all very meta.

As mentioned, as a geek I’m intrigued by the Metaverse. But as a human who ponders our future (probably more than is healthy), I have grave concerns on a number of fronts. I have mentioned most of these individually in previous posts, but I thought it might be useful to round them up:

Removed from Reality

My first issue is that the Metaverse just isn’t real. It’s a manufactured reality. This is at the heart of all the other issues to come.

We might think we’re clever, and that we can manufacture a better world than the one nature has given us, but my response to that would be Orgel’s Second Rule, courtesy of Sir Francis Crick, co-discoverer of the structure of DNA: “Evolution is cleverer than you are.”

For millions of years, we have evolved to be a good fit in our natural environment. There are thousands of generations of trial and error baked into our DNA that make us effective in our reality. Most of that natural adaptation lies hidden from us, ticking away below the surface of both our bodies and brains, silently correcting course to keep us aligned and functioning well in our world.

But we, in our never-ending human hubris, somehow believe we can engineer an environment better than reality in less than a single generation. If we take Second Life as the first iteration of the metaverse, we’re barely two decades into the engineering of a meta-reality.

If I were placing bets on who is the better environmental designer for us, humans or evolution, my money would be on evolution, every time.

Whose Law Is It Anyway?

One of the biggest selling features of the Metaverse is that it frees us from the restrictions of geography. Physical distance has no meaning when we go meta.

But this also has issues. Societies need laws and our laws have evolved to be grounded within the boundaries of geographical jurisdictions. What happens when those geographical jurisdictions become meaningless? Right now, there are no laws specifically regulating the Metaverse. And even if there are laws in the future, in what jurisdiction would they be enforced?

This is a troubling loophole – and by hole I mean a massive gaping metaverse-sized void. You know who is attracted by a lack of laws? Those who have no regard for the law. If you don’t think that criminals are currently eyeing the metaverse looking for opportunity, I have a beautiful virtual time-share condo in the heart of meta-Boca Raton that I’d love to sell you.

Data Is the Matter of the Metaverse

Another “selling feature” for the metaverse is the ability to append metadata to our own experiences, enriching them with access to information and opportunities that would be impossible in the real world. In the metaverse, the world is at our fingertips – or in our virtual headset – as the case may be. We can stroll through worlds, real or imagined, and the sum of all our accumulated knowledge is just one user-prompt away.

But here’s the thing about this admittedly intriguing notion: it makes data a commodity, and commodities are built to be exchanged based on market value. In order to get something of value, you have to exchange something of value. And for the builders of the metaverse, that value lies in your personal data. The last shreds of personal privacy protection will be gone, forever.

A For-Profit Reality

This brings us to my biggest problem with the Metaverse: the motivation for building it. It is being built not by philanthropists or philosophers, academics or even bureaucrats. The metaverse is being built by corporations that have to hit quarterly profit projections. They are building it to make a buck, or, more correctly, several billion bucks.

These are the same people who have made social media addictive by taking the dirtiest secrets of Las Vegas casinos and using them to enslave us through our smartphones. They have toppled legitimate governments for the sake of advertising revenue. They have destroyed our concept of truth, bashed apart the soft guardrails of society and are currently dismantling democracy. There is no noble purpose for a corporation – its only purpose is profit.

Do you really want to put your future reality in those hands?

With Digital Friends Like These, Who Needs Enemies?

Recently, I received an email from Amazon that began:

“You’re amazing. Really, you’re awesome! Did that make you smile? Good. Alexa is here to compliment you. Just say, ‘Alexa, compliment me’”

“What,” I said to myself, “sorry-assed state is my life in that I need to depend on a little black electronic hockey puck to affirm my self-worth as a human being?”

I realize that the tone of the email likely had tongue at least partway implanted in cheek, but still, seriously – WTF, Alexa? (Which, incidentally, Alexa also has covered. Pose that question and Alexa responds: “I’m always interested in feedback.”)

My next thought was: maybe I think this is a joke, but there are probably people out there who need this. Maybe their lives are dangling by a thread, and it’s Alexa’s soothing voice digitally pumping their tires that keeps them hanging on until tomorrow. And – if that’s true – should I be the one to scoff at it?

I dug a little further into the question, “Can we depend on technology for friendship, for understanding, even – for love?”

The answer, it turns out, is probably yes.

A few studies have shown that we will share more with a virtual therapist than a human one in a face-to-face setting. We feel heard without feeling judged.

In another study, patients with a virtual nurse ended up creating a strong relationship with it that included:

  • Using close forms of greeting and goodbye
  • Expressing happiness to see the nurse
  • Using compliments
  • Engaging in social chat
  • And expressing a desire to work together and speak with the nurse again

Yet another study found that robots can build an even stronger relationship with us by giving us a pat on the hand or touching our shoulder. We are social animals and don’t do well when we lose that sociability. If we go too long without being touched, we experience something called “skin hunger” and start feeling stressed, depressed and anxious. Robots like these are being tested in seniors’ care facilities to help combat extreme loneliness.

In reading through these studies, I was amazed at how quickly respondents seemed to bond with their digital allies. We have highly evolved mechanisms that determine when and with whom we seem to place trust. In many cases, these judgements are based on non-verbal cues: body language, micro-expressions, even how people smell. It surprised me that when our digital friends presented none of these, the bonds still developed. In fact, it seems they were deeper and stronger than ever!

Perhaps it’s the very lack of humanness that is the explanation. As in the case of the success of a virtual therapist, maybe these relationships work because we can leave the baggage of being human behind. Virtual assistants are there to serve us, not judge or threaten us. We let our guards down and are more willing to open up.

Also, I suspect that the building blocks of these relationships are put in place not by the rational, thinking part of our brains but by the emotional, feeling part. It’s been shown that self-affirmation works by activating the reward centers of our brain, the ventral striatum and ventromedial prefrontal cortex. These are not pragmatic, cautious parts of our cognitive machinery. As I’ve said before, they’re all gas and no brakes. We don’t think a friendship with a robot is weird because we don’t think about it at all; we just feel better. And that’s enough.

AI companionship seems a benign – even beneficial – use of technology. But what might the unintended consequences be? Are we opening ourselves up to potential dangers by depending on AI for our social contact – especially when the lines are blurred between for-profit motives and the affirmation we become dependent on?

In the therapeutic use cases of virtual relationships outlined so far, there is no “for-profit” motive. But Amazon, Apple, Facebook, Google and the other providers of consumer-directed AI companionship are definitely in it for the money. Even more troubling, two of those – Facebook and Google – depend on advertising for their revenue. Much as this gang would love us to believe they have only our best interests in mind, over $1.2 trillion in combined revenue says otherwise. I suspect they have put a carefully calculated price on digital friendship.

Perhaps it’s that – more than anything – that threw up the red flags when I got that email from Amazon. It sounded like it was coming from a friend, and that’s exactly what worries me.

Making Time for Quadrant Two

Several years ago, I read Stephen Covey’s “The 7 Habits of Highly Effective People.” It had a lasting impact on me. Through my life, I have found myself relearning those lessons over and over again.

One of them was the four quadrants of time management. How we spend our time in these quadrants determines how effective we are.

Imagine a box split into four quarters. In the upper left box, we’ll put a label: “Important and Urgent.” Next to it, in the upper right, we’ll put a label saying “Important But Not Urgent.” The label for the lower left is “Urgent But Not Important.” And the last quadrant — in the lower right — is labeled “Neither Important nor Urgent.”
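For the programmers among you, here is a minimal sketch of that matrix as a lookup table (my own framing, not anything from Covey’s book):

```python
# A minimal sketch of Covey's matrix as a lookup table (my own framing,
# not Covey's): each (important, urgent) pair names a quadrant.

QUADRANTS = {
    (True, True): "Q1: Important and Urgent - firefighting",
    (True, False): "Q2: Important But Not Urgent - planning and strategy",
    (False, True): "Q3: Urgent But Not Important - other people's problems",
    (False, False): "Q4: Neither - recharging",
}

def quadrant(important: bool, urgent: bool) -> str:
    """Classify a task by its importance and urgency."""
    return QUADRANTS[(important, urgent)]

print(quadrant(important=True, urgent=False))  # the quadrant Covey says to grow
```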

The upper left quadrant — “Important and Urgent” — is our firefighting quadrant. It’s the stuff that is critical and can’t be put off, the emergencies in our life.

We’ll skip over quadrant two — “Important But Not Urgent” — for a moment and come back to it.

In quadrant three — “Urgent But Not Important” — are the interruptions that other people bring to us. These are the times we should say, “That sounds like a you problem, not a me problem.”

Quadrant four is where we unwind and relax, occupying our minds with nothing at all in order to give our brains and body a chance to recharge. Bingeing Netflix, scrolling through Facebook or playing a game on our phones all fall into this quadrant.

And finally, let’s go back to quadrant two: “Important But Not Urgent.” This is the key quadrant. It’s here where long-term planning and strategy live. This is where we can see the big picture.

The secret of effective time management is finding ways to shift time spent from all the other quadrants into quadrant two. It’s managing and delegating emergencies from quadrant one, so we spend less time fire-fighting. It’s prioritizing our time above the emergencies of others, so we minimize interruptions in quadrant three. And it’s keeping just enough time in quadrant four to minimize stress and keep from being overwhelmed.

The lesson of the four quadrants came back to me when I was listening to an interview with Dr. Sandro Galea, epidemiologist and author of “The Contagion Next Time.” Dr. Galea was talking about how our health care system responded to the COVID pandemic. The entire system was suddenly forced into quadrant one. It was in crisis mode, trying desperately to keep from crashing. Galea reminded us that we were forced into this mode despite there being hundreds of lengthy reports from previous pandemics — notably the SARS crisis — containing thousands of suggestions that could have helped to partially mitigate the impact of COVID.

Few of those suggestions were ever implemented. Our health care system, Galea noted, tends to continually lurch back and forth within quadrant one, veering from crisis to crisis. When a crisis is over, rather than go to quadrant two and make the changes necessary to avoid similar catastrophes in the future, we put the inevitable reports on a shelf where they’re ignored until it is — once again — too late.

For me, that paralleled a theme I have talked about often in the past — how we tend to avoid grappling with complexity. Quadrant two stuff is, inevitably, complex in nature. The quadrant is jammed with what we call wicked problems. In a previous column, I described these as, “complex, dynamic problems that defy black-and-white solutions. These are questions that can’t be answered by yes or no — the answer always seems to be maybe.  There is no linear path to solve them. You just keep going in loops, hopefully getting closer to an answer but never quite arriving at one. Usually, the optimal solution to a wicked problem is ‘good enough — for now.’”

That’s quadrant two in a nutshell. Quadrant-one problems must be triaged into a sort of false clarity. You have to deal with the critical stuff first. The nuances and complexity are, by necessity, ignored. That all gets pushed to quadrant two, where we say we will deal with it “someday.”

Of course, someday never comes. We either stay in quadrant one, are hijacked into quadrant three, or collapse through sheer burn-out into quadrant four. The stuff that waits for us in quadrant two is just too daunting to even consider tackling.

This has direct implications for technology and every aspect of the online world. Our industry, because of its hyper-compressed timelines and the huge dollars at stake, seems firmly lodged in the urgency of quadrant one. Everything on our to-do list tends to be a fire we have to put out. And that’s true even if we only consider the things we intentionally plan for. When we factor in the unplanned emergencies, quadrant one is a time-sucking vortex that leaves nothing for any of the other quadrants.

But there is a seemingly infinite number of quadrant two things we should be thinking about. Take social media and privacy, for example. When an online platform has a massive data breach, that is a classic quadrant one catastrophe. It’s all hands on deck to deal with the crisis. But all the complex questions around what our privacy might look like in a data-inundated world fall into quadrant two. As such, they are things we don’t think much about. It’s important, but it’s not urgent.

Quadrant two thinking is systemic thinking, long-term and far-reaching. It allows us to build the foundations that help to mitigate crises and minimize unintended consequences.

In a world that seems to rush from fire to fire, it is this type of thinking that could save our asses.