The Eternal Hatred of Interruptive Messages

Spamming and Phishing and Robocalls at Midnight
Pop-ups and Autoplays and LinkedIn Requests from Salespeople

These are a few of my least favorite things

We all feel the excruciating pain of unsolicited demands on our attention. In a survey by online security firm Kaspersky that asked 2,000 Brits to rank the 50 most annoying things in life, deleting spam email came in at number four, behind scrubbing the bath, being trapped in voicemail hell and cleaning the oven.

Based on this study, cleanliness is actually next to spamminess.

Granted, Kaspersky is a tech security firm, so the results are probably biased to the digital side, but for me they check out. As I ran down the list, I found I hated all the same things.

In the same study, robocalls came in at number 10. Personally, they top my list, especially phishing robocalls. I hate – hate – hate rushing to my phone only to hear that the IRS is going to prosecute me unless I immediately push 7 on my touch-tone keypad.

One, I’m Canadian. Two, go to Hell.

I spend more and more of my life trying to avoid marketers and scammers (the line between the two is often fuzzy) who are desperate to get my attention by any means possible. And it’s only going to get worse. A study just out showed that the ChatGPT AI chatbot could be a game changer for phishing, making scam emails harder to detect. And with Google’s Gmail filters already trapping 100 million phishing emails a day, that is not good news.

The marketers in my audience are probably outrunning Usain Bolt in their dash to distance themselves from spammers, but interruptive demands on our attention all sit on a spectrum that shares the same baseline: any demand on our attention that we didn’t ask for will annoy us. The only difference is the degree of annoyance.

Let’s look at the psychological mechanisms behind that annoyance.

There is a direct link between the parts of our brain that govern the focusing of attention and the parts that regulate our emotions. At its best, it’s called “flow” – a term coined by Mihaly Csikszentmihalyi that describes a sense of full engagement and purpose. At its worst, it’s a feeling of anger and anxiety when we’re unwillingly dragged away from the task at hand.

A 2017 neurological study by Rejer and Jankowski found that when a participant’s cognitive processing of a task was interrupted by online ads, activity in the frontal and prefrontal cortex simply shut down while other parts of the brain significantly shifted activity, indicating a loss of focus and a downward slide in emotions.

Another study, by Edwards, Li and Lee, points the finger at something called Reactance Theory as a possible explanation. Very simply put, when something interrupts us, we perceive a loss of freedom to act as we wish and a loss of control of our environment. Again, we respond by getting angry.

It’s important to note that this negative emotional burden applies to any interruption that derails what we intend to do. It is not specific to advertising, but a lot of advertising falls into that category. It’s the nature of the interruption and our mental engagement with the task that determine the degree of negative emotion.

Take skimming through a news website, for instance. We are there to forage for information. We are not actively engaged in any specific task. And so being interrupted by an ad while in this frame of mind is minimally irritating.

But let’s imagine that a headline catches our attention, and we click to find out more. Suddenly, we’re interrupted by a pop-up or pre-roll video ad that hijacks our attention, forcing us to pause our intention and focus on irrelevant information. Our level of annoyance begins to rise quickly.

Robocalls fall into a different category of annoyance for many reasons. First, we have a conditioned response to phone calls where we hope to be rewarded by hearing from someone we know and care about. That’s what makes it so difficult to ignore a ringing phone.

Second, phone calls are extremely interruptive. We must literally drop whatever we’re doing to pick up a phone. When we go to all this effort only to realize we’ve been duped by an unsolicited and irrelevant call, the “red mist” starts to descend over us.

You’ll note that – up to this point – I haven’t even dealt with the nature of the message. This has all been focused on the delivery of the message, which immediately puts us in a more negative mood. It doesn’t matter whether the message is about a service special for our vehicle, an opportunity to buy term life insurance or an attempt by a fictitious Nigerian prince to lighten the load of our bank account by several thousand dollars; whatever the message, we start in an irritated state simply due to the nature of the interruption.

Of course, the more nefarious the message that’s delivered, the more negative our emotional response will be. And this has a compounding effect on any form of intrusive advertising: we learn to associate the delivery mechanism itself with attempts to defraud us. Any politician who depends on robocalls to raise awareness the day before an election should ponder their ad-delivery mechanism.

My Many Problems with the Metaverse

I recently had dinner with a comedian who had just done his first gig in the Metaverse, at a new Meta-Comedy Club. He was excited and showed me a recording of the gig.

I have to admit, my inner geek thought it was very cool: disembodied hands clapping with avataresque names floating above, bursts of virtual confetti for the biggest laughs and even a virtual hook that instantly snagged meta-hecklers, banishing them to meta-purgatory until they promised to behave. The comedian said he wanted to record a comedy meta-album in the meta-club to release to his meta-followers.

It was all very meta.

As mentioned, as a geek I’m intrigued by the Metaverse. But as a human who ponders our future (probably more than is healthy) – I have grave concerns on a number of fronts. I have mentioned most of these individually in previous posts, but I thought it might be useful to round them up:

Removed from Reality

My first issue is that the Metaverse just isn’t real. It’s a manufactured reality. This is at the heart of all the other issues to come.

We might think we’re clever, and that we can manufacture a better world than the one that nature has given us, but my response to that would be Orgel’s Second Rule, courtesy of Francis Crick, co-discoverer of the structure of DNA: “Evolution is cleverer than you are.”

For millions of years, we have evolved to be a good fit in our natural environment. There are thousands of generations of trial and error baked into our DNA that make us effective in our reality. Most of that natural adaptation lies hidden from us, ticking away below the surface of both our bodies and brains, silently correcting course to keep us aligned and functioning well in our world.

But we, in our never-ending human hubris, somehow believe we can engineer an environment better than reality in less than a single generation. If we take Second Life as the first iteration of the metaverse, we’re barely two decades into the engineering of a meta-reality.

If I were placing bets on who is the better environmental designer for us, humans or evolution, my money would be on evolution, every time.

Whose Law Is It Anyway?

One of the biggest selling features of the Metaverse is that it frees us from the restrictions of geography. Physical distance has no meaning when we go meta.

But this also has issues. Societies need laws and our laws have evolved to be grounded within the boundaries of geographical jurisdictions. What happens when those geographical jurisdictions become meaningless? Right now, there are no laws specifically regulating the Metaverse. And even if there are laws in the future, in what jurisdiction would they be enforced?

This is a troubling loophole – and by hole I mean a massive gaping metaverse-sized void. You know who is attracted by a lack of laws? Those who have no regard for the law. If you don’t think that criminals are currently eyeing the metaverse looking for opportunity, I have a beautiful virtual time-share condo in the heart of meta-Boca Raton that I’d love to sell you.

Data Is the Matter of the Metaverse

Another “selling feature” for the metaverse is the ability to append metadata to our own experiences, enriching them with access to information and opportunities that would be impossible in the real world. In the metaverse, the world is at our fingertips – or in our virtual headset – as the case may be. We can stroll through worlds, real or imagined, and the sum of all our accumulated knowledge is just one user-prompt away.

But here’s the thing about this admittedly intriguing notion: it makes data a commodity and commodities are built to be exchanged based on market value. In order to get something of value, you have to exchange something of value. And for the builders of the metaverse, that value lies in your personal data. The last shreds of personal privacy protection will be gone, forever!

A For-Profit Reality

This brings us to my biggest problem with the Metaverse – the motivation for building it. It is being built not by philanthropists, philosophers, academics or even bureaucrats. The metaverse is being built by corporations that have to hit quarterly profit projections. They are building it to make a buck – or, more correctly, several billion bucks.

These are the same people who have made social media addictive by taking the dirtiest secrets of Las Vegas casinos and using them to enslave us through our smartphones. They have toppled legitimate governments for the sake of advertising revenue. They have destroyed our concept of truth, bashed apart the soft guardrails of society and are currently dismantling democracy. There is no noble purpose for a corporation – their only purpose is profit.

Do you really want to put your future reality in those hands?

With Digital Friends Like These, Who Needs Enemies?

Recently, I received an email from Amazon that began:

“You’re amazing. Really, you’re awesome! Did that make you smile? Good. Alexa is here to compliment you. Just say, ‘Alexa, compliment me.’”

“What,” I said to myself, “sorry-assed state is my life in that I need to depend on a little black electronic hockey puck to affirm my self-worth as a human being?”

I realize that the tone of the email likely had tongue at least partway implanted in cheek, but still, seriously – WTF, Alexa? (Which, incidentally, Alexa also has covered. Pose that question and Alexa responds, “I’m always interested in feedback.”)

My next thought was: maybe I think this is a joke, but there are probably people out there who need this. Maybe their lives are dangling by a thread, and it’s Alexa’s soothing voice digitally pumping their tires that keeps them hanging on until tomorrow. And – if that’s true – should I be the one to scoff at it?

I dug a little further into the question, “Can we depend on technology for friendship, for understanding, even – for love?”

The answer, it turns out, is probably yes.

A few studies have shown that we will share more with a virtual therapist than a human one in a face-to-face setting. We feel heard without feeling judged.

In another study, patients with a virtual nurse ended up creating a strong relationship with it that included:

  • Using close forms of greeting and goodbye
  • Expressing happiness to see the nurse
  • Using compliments
  • Engaging in social chat
  • And expressing a desire to work together and speak with the nurse again

Yet another study found that robots can even build a stronger relationship with us by giving us a pat on the hand or touching our shoulder. We are social animals and don’t do well when we lose that sociability. If we go too long without being touched, we experience something called “skin hunger” and start feeling stressed, depressed and anxious. These robots are being tested in seniors’ care facilities to help combat extreme loneliness.

In reading through these studies, I was amazed at how quickly respondents seemed to bond with their digital allies. We have highly evolved mechanisms that determine when, and in whom, we place trust. In many cases, these judgements are based on non-verbal cues: body language, micro-expressions, even how people smell. It surprised me that even though our digital friends presented none of these cues, the bonds still developed. In fact, it seems they were deeper and stronger than ever!

Perhaps it’s the very lack of humanness that is the explanation. As in the case of the success of a virtual therapist, maybe these relationships work because we can leave the baggage of being human behind. Virtual assistants are there to serve us, not judge or threaten us. We let our guards down and are more willing to open up.

Also, I suspect that the building blocks of these relationships are put in place not by the rational, thinking part of our brains but the emotional, feeling part. It’s been shown that self-affirmation works by activating the reward centers of our brain, the ventral striatum and ventromedial prefrontal cortex. These are not pragmatic, cautious parts of our cognitive machinery. As I’ve said before, they’re all gas and no brakes. We don’t think a friendship with a robot is weird because we don’t think about it at all, we just feel better. And that’s enough.

AI companionship seems a benign – even beneficial – use of technology, but what might the unintended consequences be? Are we opening ourselves up to potential dangers by depending on AI for our social contact – especially when the lines are blurred between for-profit motives and the affirmation we become dependent on?

In the therapeutic use cases of virtual relationships outlined up to now, there is no “for-profit” motive. But Amazon, Apple, Facebook, Google and the other providers of consumer-directed AI companionship are definitely in it for the money. Even more troubling, two of those – Facebook and Google – depend on advertising for their revenue. Much as this gang would love us to believe that they only have our best interests in mind, over $1.2 trillion in combined revenue says otherwise. I suspect they have put a carefully calculated price on digital friendship.

Perhaps it’s that – more than anything – that threw up the red flags when I got that email from Amazon. It sounded like it was coming from a friend, and that’s exactly what worries me.

Making Time for Quadrant Two

Several years ago, I read Stephen Covey’s “The 7 Habits of Highly Effective People.” It had a lasting impact on me. Through my life, I have found myself relearning those lessons over and over again.

One of them was the four quadrants of time management. How we spend our time in these quadrants determines how effective we are.

Imagine a box split into four quadrants. In the upper left, we’ll put a label: “Important and Urgent.” Next to it, in the upper right, we’ll put a label saying “Important But Not Urgent.” The label for the lower left is “Urgent But Not Important.” And the last quadrant — in the lower right — is labeled “Neither Important nor Urgent.”

The upper left quadrant — “Important and Urgent” — is our firefighting quadrant. It’s the stuff that is critical and can’t be put off, the emergencies in our life.

We’ll skip over quadrant two — “Important But Not Urgent” — for a moment and come back to it.

In quadrant three — “Urgent But Not Important” — are the interruptions that other people bring to us. These are the times we should say, “That sounds like a you problem, not a me problem.”

Quadrant four is where we unwind and relax, occupying our minds with nothing at all in order to give our brains and body a chance to recharge. Bingeing Netflix, scrolling through Facebook or playing a game on our phones all fall into this quadrant.

And finally, let’s go back to quadrant two: “Important But Not Urgent.” This is the key quadrant. It’s here where long-term planning and strategy live. This is where we can see the big picture.

The secret of effective time management is finding ways to shift time spent from all the other quadrants into quadrant two. It’s managing and delegating emergencies from quadrant one, so we spend less time fire-fighting. It’s prioritizing our time above the emergencies of others, so we minimize interruptions in quadrant three. And it’s keeping just enough time in quadrant four to minimize stress and keep from being overwhelmed.

The lesson of the four quadrants came back to me when I was listening to an interview with Dr. Sandro Galea, epidemiologist and author of “The Contagion Next Time.” Dr. Galea was talking about how our health care system responded to the COVID pandemic. The entire system was suddenly forced into quadrant one. It was in crisis mode, trying desperately to keep from crashing. Galea reminded us that we were forced into this mode, despite there being hundreds of lengthy reports from previous pandemics — notably the SARS crisis — containing thousands of suggestions that could have helped to partially mitigate the impact of COVID.

Few of those suggestions were ever implemented. Our health care system, Galea noted, tends to continually lurch back and forth within quadrant one, veering from crisis to crisis. When a crisis is over, rather than go to quadrant two and make the changes necessary to avoid similar catastrophes in the future, we put the inevitable reports on a shelf where they’re ignored until it is — once again — too late.

For me, that paralleled a theme I have talked about often in the past — how we tend to avoid grappling with complexity. Quadrant two stuff is, inevitably, complex in nature. The quadrant is jammed with what we call wicked problems. In a previous column, I described these as, “complex, dynamic problems that defy black-and-white solutions. These are questions that can’t be answered by yes or no — the answer always seems to be maybe.  There is no linear path to solve them. You just keep going in loops, hopefully getting closer to an answer but never quite arriving at one. Usually, the optimal solution to a wicked problem is ‘good enough — for now.’”

That’s quadrant two in a nutshell. Quadrant-one problems must be triaged into a sort of false clarity. You have to deal with the critical stuff first. The nuances and complexity are, by necessity, ignored. That all gets pushed to quadrant two, where we say we will deal with it “someday.”

Of course, someday never comes. We either stay in quadrant one, are hijacked into quadrant three, or collapse through sheer burn-out into quadrant four. The stuff that waits for us in quadrant two is just too daunting to even consider tackling.

This has direct implications for technology and every aspect of the online world. Our industry, because of its hyper-compressed timelines and the huge dollars at stake, seems firmly lodged in the urgency of quadrant one. Everything on our to-do list tends to be a fire we have to put out. And that’s true even if we only consider the things we intentionally plan for. When we factor in the unplanned emergencies, quadrant one is a time-sucking vortex that leaves nothing for any of the other quadrants.

But there is a seemingly infinite number of quadrant-two things we should be thinking about. Take social media and privacy, for example. When an online platform has a massive data breach, that is a classic quadrant-one catastrophe. It’s all hands on deck to deal with the crisis. But all the complex questions around what our privacy might look like in a data-inundated world fall into quadrant two. As such, they are things we don’t think much about. They’re important, but they’re not urgent.

Quadrant-two thinking is systemic thinking, long-term and far-reaching. It allows us to build the foundations that help mitigate crises and minimize unintended consequences.

In a world that seems to rush from fire to fire, it is this type of thinking that could save our asses.

The Unusual Evolution of the Internet

The Internet we have today evolved out of improbability. It shouldn’t have happened the way it did. It evolved as a wide-open network forged by starry-eyed academics and geeks who really believed it might make the world better. It wasn’t supposed to win against walled gardens like CompuServe, Prodigy and AOL — but it did. If you rolled back the clock, knowing what we know now, you could be sure it would never play out the same way again.

To use the same analogy that Eric Raymond did in his now-famous essay on the development of Linux, these were people who believed in bazaars rather than cathedrals. The internet was cobbled together to scratch an intellectual and ethical itch, rather than a financial one.

But today, as this essay in The Atlantic by Jonathan Zittrain warns us, the core of the internet is rotting. Because it was built by everyone and no one, all the superstructure that was assembled on top of that core is teetering. Things work, until they don’t: “The internet was a recipe for mortar, with an invitation for anyone, and everyone, to bring their own bricks.”

The problem is, it’s no one’s job to make sure those bricks stay in place.

Zittrain talks about the holes in humanity’s store of knowledge. But there’s another thing about this evolution that is either maddening or magical, depending on your perspective: It was never built with a business case in mind.

Eventually, commerce pipes were retrofitted into the whole glorious mess, and billions managed to be made. Google alone has managed to pull over a trillion dollars in revenue in less than 20 years by becoming the de facto index to the world’s most haphazard library of digital stuff. Amazon went one better, using the Internet to reinvent humanity’s marketplace and pulling in $2 trillion in revenue along the way.

But despite all this massive monetization, the beneficiaries still had to at least pay lip service to that original intent: the naïve belief that technology could make us better, and that it didn’t just have to be about money.

Even Google, which is on its way to posting $200 billion in revenue, making it the fifth-biggest media company in the world (after Netflix, Disney, Comcast, and AT&T), stumbled on its way to making a buck. Perhaps it’s because its founders, Larry Page and Sergey Brin, didn’t trust advertising. In their original academic paper, they said that “advertising-funded search engines will inherently be biased toward the advertisers and away from the needs of consumers.” Of course, they ultimately ended up giving in to the dark side of advertising. But I watched the Google user experience closely from 2003 to 2011, and that dedication to the user was always part of a delicate balancing act that was generally successful.

But that innocence of the original Internet is almost gone, as I noted in a recent post. And there are those who want to make sure that the next thing — whatever it is — is built on a framework that has monetization built in. It’s why Mark Zuckerberg is feverishly hoping that his company can build the foundations of the Metaverse. It’s why Google is trying to assemble the pipes and struts that build the new web. Those things would be completely free of the moral — albeit naïve — constraints that still linger in the original model. In the new one, there would only be one goal: making sure shareholders are happy.

It’s also natural that many of those future monetization models will likely embrace advertising, which is, as I’ve said before, the path of least resistance to profitability.

We should pay attention to this. The very fact that the Internet’s original evolution was as improbable and profit-free as it was puts us in a unique position today. What would it look like if things had turned out differently, and the internet had been profit-driven from day one? I suspect it might have been better-maintained but a lot less magical, at least in its earliest iterations.

Whatever that new thing is will form a significant part of our reality. It will be even more foundational and necessary to us than the current internet. We won’t be able to live without it. For that reason, we should worry about the motives that may lie behind whatever “it” will be.

The Relationship between Trust and Tech: It’s Complicated

Today, I wanted to follow up on last week’s post about not trusting tech companies with your privacy. In that post, I said, “To find a corporation’s moral fiber, you always, always, always have to follow the money.”

A friend from back in my industry show days — the always insightful Brett Tabke — reached out to comment, suggesting that the “holier-than-thou” privacy stand adopted by Tim Cook and Apple in the current brouhaha with Facebook is one of convenience.

“I really wonder though if it is a case of do-the-right-thing privacy moral stance, or one of convenience that supports their ecosystem, and attacks a competitor?” he asked.

It’s hard to argue against that. As Brett mentioned, Apple really can’t lose by “taking money out of a side-competitor’s pocket and using it to lay more foundational cornerstones in the walled garden, [which] props up the illusion that the garden is a moral feature, and not a criminal antitrust offence.”

But let’s look beyond Facebook and Apple for a moment. As Brett also mentioned to me, “So who does a privacy action really impact more? Does it hit Facebook or ultimately Google? Facebook is just collateral damage here in the real war with Google. Apple and Google control their own platform ecosystems, but only Google can exert influence over the entire web. As we learned from the unredacted documents in the States vs Google antitrust filings, Google is clearly trying to leverage its assets to exert that control — even when ethically dubious.”

So, if we are talking trust and privacy, where is Google in this debate? Given the nature of Google’s revenue stream, its stand on privacy is not quite as blatantly obvious (or as self-serving) as Facebook’s. Both depend on advertising to pay the bills, but the nature of that advertising is significantly different.

57% of Alphabet’s (Google’s parent company) annual $182-billion revenue stream still comes from search ads, according to its most recent annual report. And search advertising is relatively immune from crackdowns on privacy.

When you search for something on Google, you have already expressed your intent, which is the clearest possible signal with which you can target advertising. Yes, additional data taken with or without your knowledge can help fine-tune ad delivery — and Google has shown it’s certainly not above using this — but Apple tightening up its data security will not significantly impair Google’s ability to make money through its search revenue channel.

Facebook’s advertising model, on the other hand, targets you well before any expression of intent. For that reason, it has to rely on behavioral data and other targeting to effectively deliver those ads. Personal data is the lifeblood of such targeting. Turn off the tap, and Facebook’s revenue model dries up instantly.

But Google has always had ambitions beyond search revenue. Even today, 43% of its revenue comes from non-search sources. Google has always struggled with the inherently capped nature of search-based ad inventory. There are only so many searches done against which you can serve advertising. And, as Brett points out, that leads Google to look at the very infrastructure of the web to find new revenue sources. And that has led to signs of a troubling collusion with Facebook.

Again, we come back to my “follow the money” mantra for rooting out rot in the system. And in this case, the money we’re talking about is the premium that Google skims off the top when it determines which ads are shown to you. That premium depends on Google’s ability to use data to target the most effective ads possible through its own “Open Bidding” system. According to the unredacted documents released in the antitrust suit, that premium can amount to 22% to 42% of the ad spend that goes through that system.

To sum up: it appears that if you want to know who can be trusted most with your data, it’s the companies that don’t depend on that data to support an advertising revenue model. Right now, that’s Apple. But as Brett also pointed out, don’t mistake this for any warm, fuzzy feeling that Apple is your knight in shining armour: “Apple has shown time and time again they are willing to sacrifice strong desires of customers in order to make money and control the ecosystem. Can anyone look past headphone jacks, MacBook jacks, or the absence of MacBook touch screens without getting the clear indication that these were all robber-baronesque choices of a monopoly in action? If so, then how can we go ‘all in’ on privacy with them just because we agree with the stance?”

The Tech Giant Trust Exercise

If we look at those that rule in the Valley of Silicon — the companies that determine our technological future — it seems, as I previously wrote, that Apple alone is serious about protecting our privacy.

MediaPost editor-in-chief Joe Mandese shared a post late last month about how Apple’s new privacy features are increasingly taking aim at the various ways in which advertising can be targeted to specific consumers. The latest victim in those sights is geotargeting.

Then Steve Rosenbaum mentioned last week that as Apple and Facebook gird their loins and prepare to do battle over the next virtual dominion — the metaverse — they are taking two very different approaches. Facebook sees this next dimension as an extension of its hacker mentality, a “raw, nasty network of spammers.” Apple is, as always, determined to exert top-down control over who plays in its sandbox, welcoming only those willing to play by its rules. In that approach, the company is also signaling that it will take privacy in the metaverse seriously. Apple CEO Tim Cook said he believes “users should have the choice over the data that is being collected about them and how it’s used.”

Apple can take this stand because its revenue model doesn’t depend on advertising. To find a corporation’s moral fiber, you always, always, always have to follow the money. Facebook depends on advertising for revenue. And it has repeatedly shown it doesn’t really give a damn about protecting the privacy of users. Apple, on the other hand, takes every opportunity to unfurl the privacy banner as its battle standard because its revenue stream isn’t really impacted by privacy.

If you’re looking for the rot at the roots of technology, a good place to start is anything that relies on advertising. In my 40 years in marketing, I have come to the inescapable conclusion that it is impossible for business models that rely on advertising as their primary source of revenue to stay on the right side of privacy concerns. There is an inherent conflict that cannot be resolved. In a recent earnings call, Facebook CEO Mark Zuckerberg said it in about the clearest way it could be said: “As expected, we did experience revenue headwinds this quarter, including from Apple’s [privacy rule] changes that are not only negatively affecting our business, but millions of small businesses in what is already a difficult time for them in the economy.”

Facebook has proven time and time again that when the need for advertising revenue runs up against a question of ethical treatment of users, it will always be the ethics that give way.

It’s also interesting that Europe is light years ahead of North America in introducing legislation that protects privacy. According to one Internet Privacy Ranking study, four of the five top countries for protecting privacy are in Northern Europe. Australia is the fifth. My country, Canada, shares these characteristics. We rank seventh. The US ranks 18th.

There is an interesting corollary here I’ve touched on before. All these top-ranked countries are social democracies. All have strong public broadcasting systems. All have a very different relationship with advertising than the U.S. Those of us who live in these countries are not immune from the dangers of advertising (this is certainly true for Canada), but our media structure is not wholly dependent on it. The U.S., right from the earliest days of electronic media, took a different path — one that relied almost exclusively on advertising to pay the bills.

As we start thinking about things like the metaverse or other forms of reality that are increasingly intertwined with technology, this reliance on advertising-funded platforms is something we must consider long and hard. It won’t be the companies that initiate the change. An advertising-based business model follows the path of least resistance, making it the shortest route to that mythical unicorn success story. The only way this will change will be if we — as users — demand that it changes.

And we should — we must — demand it. Ad-based tech giants that have no regard for our personal privacy are one of the greatest threats we face. The more we rely on them, the more they will ask of us.

The Privacy War Has Begun

It started innocently enough….

My iPhone just upgraded itself to iOS 14.6, and the privacy protection purge began.

In late April, Apple added App Tracking Transparency (ATT) to iOS (actually in 14.5 but for reasons mentioned in this Forbes article, I hadn’t noticed the change until the most recent update). Now, whenever I launch an app that is part of the online ad ecosystem, I’m asked whether I want to share data to enable tracking. I always opt out.
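For the curious, that prompt isn’t something each app invents: it’s a system dialog triggered through Apple’s AppTrackingTransparency framework. Here’s a minimal sketch of what an app does under the hood (the function name and the printed strings are my own illustration; the framework call and the Info.plist key are Apple’s):

```swift
import AppTrackingTransparency

// Apps must also declare a reason string in Info.plist:
//   <key>NSUserTrackingUsageDescription</key>
//   <string>Your data will be used to deliver personalized ads.</string>

// Hypothetical helper showing how an app requests the ATT prompt.
func requestTrackingPermission() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            // The user opted in; the advertising identifier is available.
            print("Tracking allowed")
        case .denied, .restricted, .notDetermined:
            // The user opted out (or hasn't answered); the identifier
            // is effectively useless for cross-app tracking.
            print("Tracking not allowed")
        @unknown default:
            break
        }
    }
}
```

The key design point: iOS shows the dialog only once per app, and if the user declines (as I do), the app has no sanctioned way to nag its way back in.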

These alerts have been generally benign. They reference benefits like “more relevant ads,” a “customized experience” and “helping to support us.” Some assume you’re opting in and make opting out a much more circuitous, time-consuming process. Most also avoid the words “tracking” and “privacy.” One referred to it in these terms: “Would you allow us to refer to your activity?”

My answer is always no. Why would I want to customize an annoyance and make it more relevant?

All in all, it’s a deceptively innocent wrapper to put on what will prove to be a cataclysmic event in the world of online advertising. No wonder Facebook is fighting it tooth and nail, as I noted in a recent post.

This shot across the bow of online advertising marks an important turning point for privacy. It’s the first time that someone has put users ahead of advertisers. Everything up to now has been lip service from the likes of Facebook, telling us we have complete control over our privacy while knowing that actually protecting that privacy would be so time-consuming and convoluted that the vast majority of us would do nothing, thus keeping its profitability flowing through the pipeline.

The simple fact of the matter is that without its ability to micro-target, online advertising just isn’t that effective. Take away the personal data, and online ads are pretty non-engaging. Also, given our continually improving ability to filter out anything that’s not directly relevant to whatever we’re doing at the time, these ads are very easy to ignore.

Advertisers need that personal data to stand any chance of piercing our non-attentiveness long enough to get a conversion. It’s always been a crapshoot, but Apple’s ATT just stacked the odds very much against the advertiser.

It’s about time. Facebook and online ad platforms have had little to no real pushback against the creeping invasion of our privacy for years now. We have no idea how extensive and invasive this tracking has been. The only inkling we get is when the targeting nails the ad delivery so well that we swear our phone is listening to our conversations. And, in a way, it is. We are constantly under surveillance.

In addition to Facebook’s histrionic bitching about Apple’s ATT, others have started to find workarounds, as reported on 9 to 5 Mac. ATT specifically targets the IDFA (Identifier for Advertisers), which enables cross-app tracking via a unique identifier. Chinese ad networks backed by the state-endorsed Chinese Advertising Association were encouraging the adoption of CAID identifiers as an alternative to IDFA. Apple has gone on record as saying ATT will be globally implemented and enforced. While CAID can’t be policed at the OS level, Apple has said that apps that track users without their consent by any means, including CAID, could be removed from the App Store.

We’ll see. Apple doesn’t have a very consistent track record when it comes to holding the line against Chinese app providers. WeChat, for one, has been granted exceptions to Apple’s developer restrictions that have not been extended to anyone else.

For its part, Google has taken a tentative step toward following Apple’s lead with its new privacy initiative on Android devices, as reported in Slash Gear. Google Play has asked developers to share what data they collect and how they use that data. At this point, Google won’t be requiring opt-in prompts as Apple does.

All of this marks a beginning. If it continues, it will throw a Kong-sized monkey wrench into the works of online advertising. The entire ecosystem is built on ad-supported models that depend on collecting and storing user data. Apple has begun nibbling away at that foundation.

The toppling has begun.

Splitting Ethical Hairs in an Online Ecosystem

In looking for a topic for today’s post, I thought it might be interesting to look at the Lincoln Project. My thought was that it would be an interesting case study in how to use social media effectively.

But what I found is that the Lincoln Project is currently imploding due to scandal. And you know what? I wasn’t surprised. Disappointed? Yes. Surprised? No.

While we on the left of the political spectrum may applaud what the Lincoln Project was doing, let’s make no mistake about the tactics used. It was the social media version of Nixon’s Dirty Tricks. The whole purpose was to bait Trump into engaging in a social media brawl. This was political mudslinging, as practiced by veteran warriors. The Lincoln Project was comfortable with getting down and dirty.

Effective? Yes. Ethical? Borderline.

But what it did highlight is the sordid but powerful force of social media influence. And it’s not surprising that those with questionable ethics, as some of the Lincoln Project leaders have proven to be, were attracted to it.

Social media is the single biggest and most effective influencer on human behavior ever invented. And that should scare the hell out of us, because it’s an ecosystem in which sociopaths will thrive.

A definition of Antisocial Personality Disorder (the condition from which sociopaths suffer) states, “People with ASPD may also use ‘mind games’ to control friends, family members, co-workers, and even strangers. They may also be perceived as charismatic or charming.”

All you have to do is substitute “social media” for “mind games,” and you’ll get my point. Social media is sociopathy writ large.

That’s why we — meaning marketers — have to be very careful what we wish for. Since Google cracked down on personally identifiable information, following in the footsteps of Apple, there has been a great hue and cry from the ad-tech community about the unfairness of it all. Some of that hue and cry has issued forth here at MediaPost, like Ted McConnell’s post a few weeks ago, “Data Winter is Coming.”

And it is data that’s at the center of all this. Social media continually pumps personal data into the online ecosystem. And it’s this data that is the essential life force of the ecosystem. Ad tech sucks up that data as a raw resource and uses it for ad delivery across multiple channels. That’s the whole point of the personal identifiers that Apple and Google are cracking down on.

I suppose one could draw an artificial boundary between social media and ad targeting in other channels, but that would be splitting hairs. It’s all part of the same ecosystem. Marketers want the data, no matter where it comes from, and they want it tied to an individual to make targeting their campaigns more effective.

By building and defending an ecosystem that enables sociopathic predators, we are contributing to the problem. McConnell and I are on opposite sides of the debate here. While I don’t disagree with some of his technical points about the efficacy of Google and Apple’s moves to protect privacy, there is a much bigger question here for marketers: Should we protect user privacy, even if it makes our jobs harder?

There has always been a moral ambiguity with marketers that I find troubling. To be honest, it’s why I finally left this industry. I was tired of the “yes, but” justification that ignored all the awful things that were happening for the sake of a handful of examples that showed the industry in a better light.

And let’s just be honest about this for a second: using personally identifiable data to build a more effective machine to influence people is an awful thing. Can it be used for good? Yes. Will it be? Not if the sociopaths have anything to say about it. It’s why the current rogues’ gallery of awful people are all scrambling to carve out as big a piece of the online ecosystem as they can.

Let’s look at nature as an example. In biology, a complex balance has evolved between predators and prey. If predators are too successful, they will eliminate their prey and will subsequently starve. So a self-limiting cycle emerges to keep everything in balance. But if the limits are removed on predators, the balance is lost. The predators are free to gorge themselves.

When it comes to our society, social media has removed the limits on “prey.” Right now, there is a never-ending supply.

It’s like we’re building a hen house, inviting a fox inside and then feigning surprise when the shit hits the fan. What the hell did we expect?

Facebook Vs. Apple Vs. Your Privacy

As I was writing last week’s words about Mark Zuckerberg’s hubris-driven view of world domination, little did I know that the next chapter was literally being written. The very next day, a full-page ad from Facebook ran in The New York Times, The Washington Post and The Wall Street Journal attacking Apple for building privacy protection prompts into iOS 14.

It will come as a surprise to no one that I line up firmly on the side of Apple in this cat fight. I have always said we need to retain control over our personal data, choosing what’s shared and when. I also believe we need to have more control over the nature of the data being shared. iOS 14 is taking some much-needed steps in that direction.

Facebook is taking a stand that sadly underlines everything I wrote just last week — a disingenuous stand for a free-market environment — by unfurling the “Save small business” banner. Zuckerberg loves to stand up for “free” things — be it speech or markets — when it serves his purpose.

And the hidden agenda here is not really hidden at all. It’s not the small business around the corner Mark is worried about. It’s the 800-billion-dollar business that he owns 60% of the voting shares in.

The headline of the ad reads, “We’re standing up to Apple for small businesses everywhere.”

Ummm — yeah, right.

What you’re standing up for, Mark, is your revenue model, which depends on Facebook’s being free to hoover up as much personal data on you as possible, across as many platforms as possible.

The only thing that you care about when it comes to small businesses is that they spend as much with Facebook as possible. What you’re trying to defend is not “free” markets or “free” speech. What you’re defending is about the furthest thing imaginable from “free”: $70-billion-plus in revenues and $18.5 billion in profits. What you’re trying to protect is your number-five slot on the Forbes list of the world’s richest people, with your net worth of $100 billion.

Then, on the very next day, Facebook added insult to injury with a second ad, this time defending the “Free Internet,” saying Apple “will change the internet as we know it” by forcing websites and blogs “to start charging you subscription fees.”

Good. The “internet as we know it” is a crap sandwich. “Free” has led us to exactly where we are now, with democracy hanging on by a thread, with true journalism in the last paroxysms of its battle for survival, and with anyone with half a brain feeling like they’re swimming in a sea of stupidity.

Bravo to Apple for pushing us away from the toxicity of “free,” and from our enthralled reverence for “free” things as the prop holding up a rapidly disintegrating information marketplace. If we accept a free model for our access to information, we must also accept advertising that will become increasingly intrusive, with even less regard for our personal privacy. We must accept all the things that come with “free”: the things that have proven to be so detrimental to our ability to function as a caring and compassionate democratic society over the past decade.

In doing the research for this column, I ran into an op-ed piece that ran last year in The New York Times. In it, Facebook co-founder Chris Hughes lays out the case for antitrust regulators dismantling Facebook’s dominance in social media.

This is a guy who was one of Zuckerberg’s best friends in college, who shared in the thrill of starting Facebook, and whose name is on the patent for Facebook’s News Feed algorithm. It’s a major move when a guy like that, knowing what he knows, says, “The most problematic aspect of Facebook’s power is Mark’s unilateral control over speech. There is no precedent for his ability to monitor, organize and even censor the conversations of two billion people.”

Hughes admits that the drive to break up Facebook won’t be easy. In the end, it may not even be successful. But it has to be attempted.

Too much power sits in Zuckerberg’s hands. An attempt has to be made to break down the walls behind which our private data is being manipulated. We cannot trust Facebook — or Mark Zuckerberg — to do the right thing with the data. It would be so much easier if we could, but it has been proven again and again and again that our trust is misplaced.

The very fact that those calling the shots at Facebook believe you’ll fall for yet another public appeal — wrapped in altruistic bullshit about protecting “free” and about as substantial as Saran Wrap — should be taken as an insult. It should make you mad as hell.

And it should put Apple’s stand to protect your privacy in the right perspective: a long overdue attempt to stop the runaway train that is social media.