The Privacy War Has Begun

It started innocently enough…

My iPhone just upgraded itself to iOS 14.6, and the privacy protection purge began.

In late April, Apple added App Tracking Transparency (ATT) to iOS (actually in 14.5, but for reasons mentioned in this Forbes article, I hadn’t noticed the change until the most recent update). Now, whenever I launch an app that is part of the online ad ecosystem, I’m asked whether I want to share data to enable tracking. I always opt out.

These alerts have been generally benign. They reference benefits like “more relevant ads,” a “customized experience” and “helping to support us.” Some assume you’re opting in, and make opting out a much more circuitous and time-consuming process. Most also avoid the words “tracking” and “privacy.” One referred to it in these terms: “Would you allow us to refer to your activity?”
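For the technically curious, the mechanics behind these prompts are simple. Here is a minimal sketch of the flow using Apple’s AppTrackingTransparency framework, the real API behind ATT. The wording an app controls is its usage-description string in Info.plist, plus any custom pre-prompt screen it shows first; the buttons on the system dialog belong to Apple.

```swift
// Minimal sketch of the ATT flow (iOS 14.5+). One call triggers the
// system prompt; the app gets back only an authorization status.
// The app-supplied wording lives in Info.plist under the
// NSUserTrackingUsageDescription key.
import AppTrackingTransparency

func requestTrackingPermission() {
    ATTrackingManager.requestTrackingAuthorization { status in
        switch status {
        case .authorized:
            print("User opted in: cross-app tracking allowed")
        case .denied:
            print("User opted out: no tracking for this app")
        case .notDetermined:
            print("Prompt has not been shown yet")
        case .restricted:
            print("Tracking restricted on this device")
        @unknown default:
            break
        }
    }
}
```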

My answer is always no. Why would I want to customize an annoyance and make it more relevant?

All in all, it’s a deceptively innocent wrapper to put on what will prove to be a cataclysmic event in the world of online advertising. No wonder Facebook is fighting it tooth and nail, as I noted in a recent post.

This shot across the bow of online advertising marks an important turning point for privacy. It’s the first time that someone has put users ahead of advertisers. Everything up to now has been lip service from the likes of Facebook, telling us we have complete control over our privacy while knowing that actually protecting that privacy would be so time-consuming and convoluted that the vast majority of us would do nothing, thus keeping its profitability flowing through the pipeline.

The simple fact of the matter is that without its ability to micro-target, online advertising just isn’t that effective. Take away the personal data, and online ads are pretty non-engaging. Also, given our continually improving ability to filter out anything that’s not directly relevant to whatever we’re doing at the time, these ads are very easy to ignore.

Advertisers need that personal data to stand any chance of piercing our non-attentiveness long enough to get a conversion. It’s always been a crapshoot, but Apple’s ATT just stacked the odds very much against the advertiser.

It’s about time. Facebook and online ad platforms have had little to no real pushback against the creeping invasion of our privacy for years now. We have no idea how extensive and invasive this tracking has been. The only inkling we get is when the targeting nails the ad delivery so well that we swear our phone is listening to our conversations. And, in a way, it is. We are constantly under surveillance.

In addition to Facebook’s histrionic bitching about Apple’s ATT, others have started to find workarounds, as reported by 9to5Mac. ATT specifically targets the IDFA (Identifier for Advertisers), which enables cross-app tracking via a unique identifier. Chinese ad networks backed by the state-endorsed Chinese Advertising Association were encouraging the adoption of CAID identifiers as an alternative to IDFA. Apple has gone on record as saying ATT will be globally implemented and enforced. While CAID can’t be policed at the OS level, Apple has said that apps that track users without their consent by any means, including CAID, could be removed from the App Store.
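For context, here is how an app reads the IDFA (Apple’s real AdSupport API, shown as a sketch). Once a user declines the ATT prompt, the identifier comes back as all zeros, which is what makes ATT enforceable at the OS level. CAID, by contrast, is fingerprinted from device signals outside any Apple API, so it can only be policed through App Store review.

```swift
// Reading the IDFA. Under ATT, this returns
// 00000000-0000-0000-0000-000000000000 unless the user has
// authorized tracking for this particular app.
import AdSupport

let idfa = ASIdentifierManager.shared().advertisingIdentifier
print("IDFA:", idfa.uuidString)
```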

We’ll see. Apple doesn’t have a very consistent track record when it comes to holding the line against Chinese app providers. WeChat, for one, has been granted exceptions to Apple’s developer restrictions that have not been extended to anyone else.

For its part, Google has taken a tentative step toward following Apple’s lead with its new privacy initiative on Android devices, as reported by SlashGear. Google Play has asked developers to share what data they collect and how they use that data. At this point, Google won’t require opt-in prompts as Apple does.

All of this marks a beginning. If it continues, it will throw a Kong-sized monkey wrench into the works of online advertising. The entire ecosystem is built on ad-supported models that depend on collecting and storing user data. Apple has begun nibbling away at that foundation.

The toppling has begun.

Facebook Vs. Apple Vs. Your Privacy

As I was writing last week’s words about Mark Zuckerberg’s hubris-driven view of world domination, little did I know that the next chapter was literally being written. The very next day, a full-page ad from Facebook ran in The New York Times, The Washington Post and The Wall Street Journal attacking Apple for building privacy protection prompts into iOS 14.

It will come as a surprise to no one that I line up firmly on the side of Apple in this cat fight. I have always said we need to retain control over our personal data, choosing what’s shared and when. I also believe we need to have more control over the nature of the data being shared. iOS 14 is taking some much-needed steps in that direction.

Facebook is taking a stand that sadly underlines everything I wrote just last week — a disingenuous stand for a free-market environment — by unfurling the “Save small business” banner. Zuckerberg loves to stand up for “free” things — be it speech or markets — when it serves his purpose.

And the hidden agenda here is not really hidden at all. It’s not the small business around the corner Mark is worried about. It’s the 800-billion-dollar business that he owns 60% of the voting shares in.

The headline of the ad reads, “We’re standing up to Apple for small businesses everywhere.”

Ummm — yeah, right.

What you’re standing up for, Mark, is your revenue model, which depends on Facebook’s being free to hoover up as much personal data on us as possible, across as many platforms as possible.

The only thing that you care about when it comes to small businesses is that they spend as much with Facebook as possible. What you’re trying to defend is not “free” markets or “free” speech. What you’re defending is about the furthest thing imaginable from “free.” It’s $70 billion plus in revenues and $18.5 billion in profits. What you’re trying to protect is your number-five slot on the Forbes list of the world’s richest people, with your net worth of $100 billion.

Then, on the very next day, Facebook added insult to injury with a second ad, this time defending the “Free Internet,” saying Apple “will change the internet as we know it” by forcing websites and blogs “to start charging you subscription fees.”

Good. The “internet as we know it” is a crap sandwich. “Free” has led us to exactly where we are now, with democracy hanging on by a thread, with true journalism in the last paroxysms of its battle for survival, and with anyone with half a brain feeling like they’re swimming in a sea of stupidity.

Bravo to Apple for pushing us away from the toxicity of “free” and our enthralled reverence for it, which props up a rapidly disintegrating information marketplace. If we accept a free model for our access to information, we must also accept advertising that will become increasingly intrusive, with even less regard for our personal privacy. We must accept all the things that come with “free”: the things that have proven to be so detrimental to our ability to function as a caring and compassionate democratic society over the past decade.

In doing the research for this column, I came across an op-ed piece that ran last year in The New York Times. In it, Facebook co-founder Chris Hughes lays out the case for antitrust regulators dismantling Facebook’s dominance in social media.

This is a guy who was one of Zuckerberg’s best friends in college, who shared in the thrill of starting Facebook, and whose name is on the patent for Facebook’s News Feed algorithm. It’s a major move when a guy like that, knowing what he knows, says, “The most problematic aspect of Facebook’s power is Mark’s unilateral control over speech. There is no precedent for his ability to monitor, organize and even censor the conversations of two billion people.”

Hughes admits that the drive to break up Facebook won’t be easy. In the end, it may not even be successful. But it has to be attempted.

Too much power sits in Zuckerberg’s hands. An attempt has to be made to break down the walls behind which our private data is being manipulated. We cannot trust Facebook — or Mark Zuckerberg — to do the right thing with the data. It would be so much easier if we could, but it has been proven again and again and again that our trust is misplaced.

The very fact that those calling the shots at Facebook believe you’ll fall for yet another public appeal wrapped in altruistic bullshit about protecting “free,” a wrapper as substantial as Saran Wrap, should be taken as an insult. It should make you mad as hell.

And it should put Apple’s stand to protect your privacy in the right perspective: a long overdue attempt to stop the runaway train that is social media.

Have More People Become More Awful?

Is it just me, or do people seem a little more awful lately? There seems to be a little more ignorance in the world, a little less compassion, a little more bullying and a lot less courtesy.

Maybe it’s just me.

It’s been a while since I’ve checked in with eternal optimist Steven Pinker.  The Harvard psychologist is probably the best-known proponent of the argument that the world is consistently trending towards being a better place.  According to Pinker, we are less bigoted, less homophobic, less misogynist and less violent. At least, that’s what he felt pre-COVID lockdown. As I said, I haven’t checked in with him lately, but I suspect he would say the long-term trends haven’t appreciably changed. Maybe we’re just going through a blip.

Why, then, does the world seem to be going to hell in a handcart? Why do people — at least some people — seem so awful?

I think it’s important to remember that our brain likes to play tricks on us. It’s in a never-ending quest to connect cause and effect. Sometimes, to do so, the brain jumps to conclusions. Unfortunately, it is aided in this tendency by a couple of accomplices — namely news reporting and social media. Even if the world isn’t getting shittier, it certainly seems to be.

Let me give you one example. In my local town, an anti-masking rally was recently held at a nearby shopping mall. Local news outlets jumped on it, with pictures and video of non-masked, non-socially distanced protesters carrying signs and chanting about our decline into Communism and how their rights were being violated.

What a bunch of boneheads — right? That was certainly the consensus in my social media circle. How could people care so little about the health and safety of their community? Why are they so awful?

But when you take the time to unpack this a bit, you realize that everyone is probably overplaying their hands. I don’t have exact numbers, but I don’t think there were more than 30 or 40 protesters at the rally. The population of my city is about 150,000. These protesters represented about 0.03% of the total population.

Let’s say for every person at the rally, there were 10 who felt the same way but weren’t there. That’s still less than 1%. Even if you multiplied the number of protesters by 100, it would still be just 3% of my community. We’re still talking about a tiny fraction of all the people who live in my city.
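If you want to check my back-of-the-envelope math, here it is as a throwaway snippet. The 40-protester count is my own high-end guess, not an official figure:

```swift
// Rally math from the paragraphs above. The protester count is a
// rough high-end estimate, not an official figure.
let population = 150_000.0
let protesters = 40.0

let atRally     = protesters / population * 100         // ~0.03%
let tenTimes    = protesters * 10 / population * 100    // ~0.27%, still under 1%
let hundredFold = protesters * 100 / population * 100   // ~2.7%, call it 3%
print(atRally, tenTimes, hundredFold)
```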

But both the news media and my social media feed have ensured that these people are highly visible. And because they are, our brain likes to use that small and very visible sample and extrapolate it to the world in general. It’s called availability bias, a cognitive shortcut where the brain uses whatever’s easy to grab to create our understanding of the world.

But availability bias is nothing new. Our brains have always done this. So, what’s different about now?

Here, we have to understand that the current reality may be leading us into another “mind-trap.” A 2018 study from Harvard introduced something called “prevalence-induced concept change,” which gives us a better understanding of how the brain focuses on signals in a field of noise. 

Basically, when signals of bad things become less common, the brain works harder to find them. We expand our definition of what is “bad” to include more examples so we can feel more successful in finding them.

I’m probably stretching beyond the limits of the original study here, but could this same thing be happening now? Are we all super-attuned to any hint of what we see as antisocial behavior so we can jump on it? 

If this is the case, again social media is largely to blame. It’s another example of our current toxic mix of dog-whistle, cancel-culture, virtue-signaling pseudo-reality that is being driven by social media.

Those are two possible things that are happening. But if we add one more, it becomes a perfect storm of perceived awfulness.

In a normal world, we all have different definitions of the ethical signals we’re paying attention to. What you are focused on right now in your balancing of what is right and wrong is probably different from what I’m currently focused on. I may be thinking about gun control while you’re thinking about reducing your carbon footprint.

But now, we’re all thinking about the same thing: surviving a pandemic. And this isn’t just some theoretical mind exercise. This is something that surrounds us, affecting us every single day. When it comes to this topic, our nerves have been rubbed raw and our patience has run out. 

Worst of all, we feel helpless. There seems to be nothing we can do to edge the world toward being a less awful place. Behaviors that in another reality and on another topic would have never crossed our radar now have us enraged. And, when we’re enraged, we do the one thing we can do: We share our rage on social media. Unfortunately, by doing so, we’re not part of the solution. We are just pouring fuel on the fire.

Yes, some people probably are awful. But are they more awful than they were this time last year? I don’t think so. I also can’t believe that the essential moral balance of our society has collectively nosedived in the last several months. 

What I do believe is that we are living in a time where we’re facing new challenges in how we perceive the world. Now, more than ever before, we’re on the lookout for what we believe to be awful. And if we’re looking for it, we’re sure to find it.

The Day My Facebook Bubble Popped

I learned this past week just how ideologically homogenous my Facebook bubble usually is. Politically, I lean left of center. Almost all the people in my bubble are the same.

Said bubble has been built from the people I have met in the past 40 years or so. Most of these people are in marketing, digital media or tech. I seldom see anything in my feed I don’t agree with — at least to some extent.

But before all that, I grew up in a small town in a very right-wing part of Alberta, Canada. Last summer, I went to my 40-year high-school reunion. Many of my fellow graduates stayed close to our hometown for those 40 years. Some are farmers. Many work in the oil and gas industry. Most of them would fall somewhere to the right of where I sit in my beliefs and political leanings.

At the reunion, we did what people do at such things — we reconnected. Which in today’s world meant we friended each other on Facebook. What I didn’t realize at the time is that I had started a sort of sociological experiment. I had poked a conservative pin into my liberal social media bubble.

Soon, I started to see posts that were definitely coming from outside my typical bubble. But most of them fell into the “we can agree to disagree” camp of political debate. My new Facebook friends and I might not see eye-to-eye on certain things, but hell — you are good people, I’m good people, we can all live together in this big ideological tent.

On May 1, 2020, things began to change. That was when Canadian Prime Minister Justin Trudeau announced that 1,500 models of “assault-style” weapons would be classified as prohibited, effective immediately. This came after Gabriel Wortman killed 22 people in Nova Scotia, making it Canada’s deadliest shooting spree. Now, suddenly posts I didn’t politically agree with were hitting a very sensitive raw nerve. Still, I kept my mouth shut. I believed arguing on Facebook was pointless.

Through everything that’s happened in the four months since (it seems like four decades), I have resisted commenting when I see posts I don’t agree with. I know how pointless it is. I realize that I am never going to change anyone’s mind through a comment on a Facebook post.

I understand this is just an expression of free speech, and we are all constitutionally entitled to exercise it. I stuck with the Facebook rule I imposed for myself — keep scrolling and bite your tongue. Don’t engage.

I broke that rule last week. One particular post did it. The post asked why, with a COVID-19 survival rate of almost 100%, we needed a vaccine at all. I knew better, but I couldn’t help it.

I engaged. It was limited engagement to begin with. I posted a quick comment suggesting that with 800,000 (and counting) already gone, saving hundreds of thousands of lives might be a pretty good reason. Right or left, I couldn’t fathom anyone arguing with that.

I was wrong. Oh my God, was I wrong. My little comment unleashed a social media shit storm. Anti-vaxxing screeds, mind-control plots via China, government conspiracies to artificially over-count the death toll and call-outs of the sheer stupidity of people wearing face masks proliferated in the comment string for the next five days. I watched the comment string grow in stunned disbelief. I had never seen this side of Facebook before.

Or had I? Perhaps the left-leaning posts I am used to are just as conspiratorial, but I don’t realize it because I happen to agree with them. I hope not, but perspective does strange things to our grasp of the things we believe to be true. Are we all — right or left — just exercising our right to free speech through a new platform? And — if we are — who am I to object?

Free speech is held up by Mark Zuckerberg and others as hallowed ground in the social-media universe. In a speech last fall at Georgetown University, Zuckerberg said: “The ability to speak freely has been central in the fight for democracy worldwide.”

It’s hard to argue with that. The ability to publicly disagree with the government or any other holder of power over you is much better than any alternative. And the drafters of the U.S. Bill of Rights agreed. Freedom of speech was enshrined in the First Amendment. But the authors of that amendment — perhaps presciently — never defined exactly what constituted free speech. Maybe they knew it would be a moving target.

Over the history of the First Amendment, it has been left to the courts to decide what the exceptions would be.

In general, the courts have tightened the definitions around one area: what types of expression constitute a “clear and present danger” to others. Currently, unless you’re specifically asking someone to break the law in the very near future, you’re protected under the First Amendment.

But is there a bigger picture here — one very specific to social media? Yes, legally in the U.S. (or Canada), you can post almost anything on Facebook.

Certainly, taking a stand against face masks and vaccines would qualify as free speech. But it’s not only the law that keeps society functioning. Most of the credit for that falls to social norms.

Social norms are the unwritten laws that govern much of our behavior. They are the “soft guard rails” of society that nudge us back on track when we veer off-course. They rely on us conforming to behaviors accepted by the majority.

If you agree with social norms, there is little nudging required. But if you happen to disagree with them, your willingness to follow them depends on how many others also disagree with them.

Famed sociologist Mark Granovetter showed in his Threshold Models of Collective Behavior that there can be tipping points in groups. If there are enough people who disagree with a social norm, it will create a cascade that can lead to a revolt against the norm.

Prior to social media, the thresholds for this type of behavior were quite high. Even if some of us were quick to act anti-socially, we were generally acting alone.

Most of us felt we needed a substantial number of like-minded people before we were willing to upend a social norm. And when our groups were determined geographically and composed of ideologically diverse members, this was generally sufficient to keep things on track.

But your social-media feed dramatically lowers this threshold.

Suddenly, all you see are supporting posts of like-minded people. It seems that everyone agrees with you. Emboldened, you are more likely to go against social norms.
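To make that concrete, here is a toy simulation in the spirit of Granovetter’s threshold model. It is my own sketch with made-up parameters, not his formulation: each person revolts once the share of revolters they perceive meets their personal threshold, and a feed that over-represents like-minded voices inflates that perceived share.

```swift
// Toy threshold-cascade simulation (hypothetical parameters; my own
// sketch in the spirit of Granovetter's model, not his formulation).
// Each person revolts once the share of revolters they perceive
// meets their threshold. A filter bubble over-represents like-minded
// people, inflating the perceived share by `bubbleFactor`.
func revoltSize(thresholds: [Double], bubbleFactor: Double) -> Int {
    var revolted = thresholds.map { $0 == 0 }  // unconditional instigators start it
    var changed = true
    while changed {
        changed = false
        let share = Double(revolted.filter { $0 }.count) / Double(thresholds.count)
        let perceived = min(share * bubbleFactor, 1.0)
        for i in thresholds.indices where !revolted[i] && perceived >= thresholds[i] {
            revolted[i] = true
            changed = true
        }
    }
    return revolted.filter { $0 }.count
}

// 990 people with thresholds spread between 5% and 95%, plus 10
// instigators who need no encouragement at all.
var thresholds = (0..<990).map { _ in Double.random(in: 0.05...0.95) }
thresholds += Array(repeating: 0.0, count: 10)

print("Unfiltered view:", revoltSize(thresholds: thresholds, bubbleFactor: 1.0))  // cascade fizzles
print("Filter bubble:  ", revoltSize(thresholds: thresholds, bubbleFactor: 10.0)) // cascade completes
```

The bubble factor is the argument in miniature: nothing about the people or their thresholds changes, only what they see.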

The problem here is that social norms are generally there because they are in the best interests of the majority of the people in society. If you go against them by refusing a vaccine or a face mask, thereby allowing a disease to spread, you endanger others. Perhaps it doesn’t meet the legal definition of “imminent lawless action,” but it does present a “clear and present danger.”

That’s a long explanation of why I broke my rule about arguing on Facebook.

Did I change anyone’s mind? No. But I did notice that the person who made the original post has changed their settings, so I no longer see their political posts. I just see posts about grandkids and puppies.

Maybe it’s better that way.

The Fundamentals of an Evil Marketplace

Last week, I talked about the nature of tech companies and why it leads them to be evil. But as I said, there was an elephant in the room I didn’t touch on — and that’s the nature of the market itself. The platform-based market also has inherent characteristics that trend toward evil.

The problem is that corporate ethics are usually based on the philosophies of Milton Friedman, an economist whose heyday was in the 1970s. Corporations are playing by a rule book that is tragically out of date.

Beware the Invisible Hand

Friedman said, “The great virtue of a free market system is that it does not care what color people are; it does not care what their religion is; it only cares whether they can produce something you want to buy. It is the most effective system we have discovered to enable people who hate one another to deal with one another and help one another.”

This is a porting over of Adam Smith’s “Invisible Hand” theory from economics to ethics: the idea that an open and free marketplace is self-regulating and, in the end, the model that is the most virtuous to the greatest number of people will take hold.

That was a philosophy born in another time, referring to a decidedly different market. Friedman’s “virtue” depends on a few traditional market conditions, idealized in the concept of a perfect market: “a market where the sellers of a product or service are free to compete fairly, and sellers and buyers have complete information.”

Inherent in Friedman’s definition of market ethics is the idea of a deliberate transaction, a value trade driven by rational thought. This is where the concept of “complete information” comes in. This information is what’s required for a rational evaluation of the value trade. When we talk about the erosion of ethics we see in tech, we quickly see that the prerequisite of a deliberate and rational transaction is missing — and with it, the conditions needed for an ethical “invisible hand.”

The other assumption in Friedman’s definition is a marketplace that encourages open and healthy competition. This gives buyers the latitude to make the choice that best aligns with their requirements.

But when we’re talking about markets that tend to trend towards evil behaviors, we have to understand that there’s a slippery slope that ends in a place far different than the one Friedman idealized.

Advertising as a Revenue Model

For developers of user-dependent networks like Google and Facebook, using advertising sales for revenue was the path of least resistance for adoption — and, once adopted by users, to profitability. It was a model co-opted from other forms of media, so everybody was familiar with it. But, in the adoption of that model, the industry took several steps away from the idea of a perfect market.

First of all, advertising significantly lowers the bar required for that rational value-exchange calculation. For users, there is no apparent monetary cost. Our value judgement mechanisms idle down because it doesn’t appear as if the protection they provide is needed.

In fact, the opposite happens. The reward center of our brain perceives a bargain and starts pumping the accelerator. We rush past the accept buttons to sign up, thrilled at the new capabilities and convenience we receive for free. That’s the first problem.

The second is that the minute you introduce advertising, you lose the transparency that’s part of the perfect market. There is a thick layer of obfuscation that sits between “users” and “producers.” The smoke screen is required because of the simple reality that the best interests of the user are almost never aligned with the best interests of the advertiser.

In this new marketplace, advertising is a zero-sum game. For the advertiser to win, the user has to lose. The developers of platforms hide this simple arithmetic behind a veil of secrecy and baffling language.

Products That Are a Little Too Personal

The new marketplace is different in another important way: The products it deals in are unlike any products we’ve ever seen before.

The average person spends about a third of his or her time online, mostly interacting with a small handful of apps and platforms. Facebook alone accounts for almost 20% of all our waking time.

This reliance on these products reinforces our belief that we’re getting the bargain of a lifetime: All the benefits the platform provides are absolutely free to us! Of course, in the time we spend online, we are feeding these tools a constant stream of intimately personal information about ourselves.

What is lurking behind this benign facade is a troubling progression of addictiveness. Because revenue depends on advertising sales, two factors become essential to success: the attention of users, and information about them.

An offer of convenience or usefulness “for free” is the initial hook, but then it becomes essential to entice users to spend more time with the platform and to volunteer more information about themselves. The most effective way to do this is to make them more and more dependent on the platform.

Now, you could build conscious dependency by giving users good, rational reasons to keep coming back. Or, you could build dependence subconsciously, by creating addicts. The first option is good business that follows Friedman’s philosophy. The second option is just evil. Many tech platforms — Facebook included — have chosen to go down both paths.

The New Monopolies

The final piece of Friedman’s idealized marketplace that’s missing is the concept of healthy competition. In a perfect marketplace, the buyer’s cost of switching is minimal. You have a plethora of options to choose from, and you’re free to pursue the one best for you.

This is definitely not the case in the marketplace of online platforms and tools like Google and Facebook. Because they are dependent on advertising revenues, their survival is linked to audience retention. To this end, they have constructed virtual monopolies by ruthlessly eliminating or buying up any potential competitors.

Further, under the guise of convenience, they have imposed significant costs on those that do choose to leave. The net effect of this is that users are faced with a binary decision: Opt into the functionality and convenience offered, or opt out. There are no other choices.

Whom Do You Serve?

Friedman also said in a 1970 paper that the only social responsibility of a business is to increase its profits. But this raises the further question, “What must be done — and for whom — to increase profits?” If it’s creating a better product so users buy more, then there is an ethical trickle-down effect that should benefit all.

But this isn’t the case if profitability is dependent on selling more advertising. Now we have to deal with an inherent ethical conflict. On one side, you have the shareholders and advertisers. On the other, you have users. As I said, for one to win, the other must lose. If we’re looking for the root of all evil, we’ll probably find it here.

This Election, Canucks Were “Zucked”

Note: I originally wrote this before results were available. Today, we know Trudeau’s Liberals won a minority government, but the Conservatives actually won the popular vote: 34.4% vs 33.06% for the Liberals. It was a very close election.

As I write this, Canadians are going to the polls in our national election. When you read this, the outcome will have been decided. I won’t predict — because this one is going to be too close to call.

For a nation that is often satirized for our tendencies to be nice and polite, this has been a very nasty campaign. So nasty, in fact, that in focusing on scandals and personal attacks, it forgot to mention the issues.

Most of us are going to the polls today without an inkling of who stands for what. We’re basically voting for the candidate we hate the least. In other words, we’re using the same decision strategy we used to pick the last guest at our grade 6 birthday party.

The devolution of democracy has now hit the Great White North, thanks to Facebook and Mark Zuckerberg.

While the amount of viral vitriol I have seen here is still a pale shadow of what I saw from south of the 49th in 2016, it’s still jarring to witness. Canucks have been “Zucked.” We’re so busy slinging mud that we’ve forgotten to care about the things that are essential to our well-being as a nation.

It should come as news to no one that Facebook has been wantonly trampling the tenets of democracy. Elizabeth Warren recently ran a fake ad on Facebook just to show she could. Then Mark Zuckerberg defended Facebook last week when he said: “While I certainly worry about an erosion of truth, I worry about living in a world where you can only post things that tech companies decide to be 100 per cent true.”

Zuckerberg believes the onus lies with the Facebook user to be able to judge what is false and what is not. This is a suspiciously convenient defense of Facebook’s revenue model wrapped up as a defense of freedom of speech. At best it’s naïve, not to mention hypocritical, since what we see is determined by Facebook’s algorithm. At worst it’s misleading and malicious.

Hitting hot buttons tied to emotions is nothing new in politics. Campaign runners have been drawing out and sharpening the long knives for decades now. TV ads added a particularly effective weapon into the political arsenal. In the 1964 presidential campaign, it even went nuclear with Lyndon Johnson’s famous “Daisy” Ad.

But this is different. For many reasons.

First of all, there is the question of trust in the channel. We have been raised in a world where media channels historically take some responsibility to delineate between what they say is factual (i.e., the news) and what is paid persuasion (i.e., the ads).

In his statement, Zuckerberg is essentially telling us that giving us some baseline of trust in political advertising is not Facebook’s job and not its problem. We should know better.

But we don’t. It’s a remarkably condescending and convenient excuse for Zuckerberg to appear to be telling us “You should be smarter than this” when he knows that this messaging has little to do with our intellectual horsepower.

This is messaging that is painstakingly designed to be mentally processed before the rational part of our brain even kicks in.

In a recent survey, three out of four Canadians said they had trouble telling which social media accounts were fake. And 40% of Canadians said they had found links to stories on current affairs that were obviously false. Those were only the links they knew were fake. I assume that many more snuck through their factual filters. By the way, people of my generation are the worst at sniffing out fake news.

We’ve all seen fake news, but only one-third of Canadians 55 and over realize they have. We can’t all be stupid.

Because social media runs on open platforms, with very few checks and balances, it’s wide open for abuse. Fake accounts, bots, hacks and other digital detritus litter the online landscape. There has been little effective policing of this. The issue is that cracking down on this directly impacts the bottom line. As Upton Sinclair said: “It is difficult to get a man to understand something, when his salary depends on his not understanding it.”

Even given these two gaping vulnerabilities, the biggest shift when we think of social media as an ad platform is that it is built on the complexity of a network. The things that come with this — things like virality, filter bubbles, threshold effects — have no corresponding rule book to play by. It’s like playing poker with a deck full of wild cards.

Now — let’s talk about targeting.

When you take all of the above and then factor in the data-driven targeting that is now possible, you light the fuse on the bomb nestled beneath our democratic platforms. You can now segment out the most vulnerable, gullible, volatile sectors of the electorate. You can feed them misinformation and prod them to action. You can then sit back and watch as the network effects play themselves out. Fan — meet shit. Shit — meet fan.

It is this that Facebook has wrought, and then Mark Zuckerberg feeds us some holier-than-thou line about freedom of speech.

Mark, I worry about living in a world where false — and malicious — information can be widely disseminated because a tech company makes a profit from it.

Why Elizabeth Warren Wants to Break Up Big Tech

Earlier this year, Democratic presidential candidate Elizabeth Warren posted an online missive in which she laid out her plans to break up big tech (notably Amazon, Google and Facebook). In it, she noted:

“Today’s big tech companies have too much power — too much power over our economy, our society, and our democracy. They’ve bulldozed competition, used our private information for profit, and tilted the playing field against everyone else. And in the process, they have hurt small businesses and stifled innovation.”

We here in the West are big believers in Adam Smith’s Invisible Hand. We inherently believe that markets will self-regulate and eventually balance themselves. We are loath to involve government in the running of a free market.

In introducing the concept of the Invisible Hand, Smith speculated:

“[The rich] consume little more than the poor, and in spite of their natural selfishness and rapacity…they divide with the poor the produce of all their improvements. They are led by an invisible hand to make nearly the same distribution of the necessaries of life, which would have been made, had the earth been divided into equal portions among all its inhabitants, and thus without intending it, without knowing it, advance the interest of the society, and afford means to the multiplication of the species.”

In short, a rising tide raises all boats. But there is a dicey little dilemma buried in the midst of the Invisible Hand premise – summed up most succinctly by the fictional Gordon Gekko in the 1987 movie Wall Street: “Greed is Good.”

More eloquently, economist and Nobel laureate Milton Friedman explained it like this:

“The great virtue of a free market system is that it does not care what color people are; it does not care what their religion is; it only cares whether they can produce something you want to buy. It is the most effective system we have discovered to enable people who hate one another to deal with one another and help one another.” 

But here’s the thing. Up until very recently, the concept of the Invisible Hand dealt only with physical goods. It was all about maximizing tangible resources and distributing them to the greatest number of people in the most efficient way possible.

The difference now is that we’re not just talking about toasters or running shoes. Physical things are not the stock in trade of Facebook or Google. They deal in information, feelings, emotions, beliefs and desires. We are not talking about hardware any longer; we are talking about the very operating system of our society. The thing that guides the Invisible Hand is no longer consumption, it’s influence. And, in that case, we have to wonder whether we’re willing to trust our future to the conscience of a corporation.

For this reason, I suspect Warren might be right. All the past arguments for keeping government out of business were based on a physical market. When we shift that to a market that peddles influence, those arguments are flipped on their head. Milton Friedman himself said, “It (the corporation) only cares whether they can produce something you want to buy.” Let’s shift that to today’s world and apply it to a corporation like Facebook – “It only cares whether they can produce something that captures your attention.” To expect anything else from a corporation that peddles persuasion is to expect too much.

The problem with Warren’s argument is that she is still using the language of a market that dealt with consumable products. She wants to break up a monopoly that is limiting competition. And she is targeting that message to an audience that generally believes that big government and free markets don’t mix.

The much, much bigger issue here is that even if you believe in the efficacy of the Invisible Hand, as described by all believers from Smith to Friedman, you also have to believe that the single purpose of a corporation that relies on selling persuasion will be to influence even more people more effectively. None of the most fervent evangelists of the Invisible Hand ever argued that corporations have a conscience. They simply stated that the interests of a profit-driven company and an audience intent on consumption were typically aligned.

We’re now playing a different game with significantly different rules.

This is Why We Can’t Have Nice Things

Relevance is the new gold standard in marketing. In an article in the Harvard Business Review written last year, John Zealley, Robert Wollan and Joshua Bellin — three senior execs at Accenture — outline five stages of marketing (paraphrased courtesy of a post from Phillip Nones):

  1. Mass marketing (up through the 1970s) – The era of mass production, scale and distribution.
  2. Marketing segmentation (1980s) – More sophisticated research enabling marketers to target customers in niche segments.
  3. Customer-level marketing (1990s and 2000s) – Advances in enterprise IT make it possible to target individuals and aim to maximize customer lifetime value.
  4. Loyalty marketing (2010s) – The era of CRM, tailored incentives and advanced customer retention.
  5. Relevance marketing (emerging) – Mass communication to the previously unattainable “Segment of One.”

This last stage – according to marketers past and present – should be the golden era of marketing:

“The perfect advertisement is one of which the reader can say, ‘This is for me, and me alone.’”

— Peter Drucker

“Audiences crave tailored messages that cater to them specifically and they are willing to offer information that enables marketers to do so.”

— Kevin Tash, CEO of Tack Media, a digital marketing agency in Los Angeles.

Umm…no! In fact, hell, no!

I agree that relevance is an important thing. And in an ethical world, the exchange Tash talks about would be a good thing, for both consumers and marketers. But we don’t live in such a world. The world we live in has companies like Facebook and Cambridge Analytica.

Stop Thinking Like a Marketer!

There is a cognitive whiplash that happens when our perspective changes from that of a marketer to that of a consumer. I’ve seen it many times. I’ve even prompted it on occasion. But to watch it in 113 minutes of excruciating detail, you should catch “The Great Hack” on Netflix.

The documentary is a journalistic peeling of the onion that is the Cambridge Analytica scandal. It was kicked off by the whistle-blowing of Christopher Wylie, a contract programmer who enjoyed his 15 minutes of fame. But to me, the far more interesting story is that of Brittany Kaiser, the director of business development at SCL Group, the parent company of Cambridge Analytica. The documentary digs into her tortured shift of perspective as she transitions from thinking like a marketer to a citizen who has just had her private data violated. It makes for compelling viewing.

Kaiser shifted her ideological compass about as far as one could possibly do, from her beginnings as an idealistic intern for Barack Obama and a lobbyist for Amnesty International to one of the chief architects of the campaigns supporting Trump’s presidential run, Brexit and other far-right persuasion blitzkriegs. At one point, she justifies her shift to the right by revealing her family’s financial struggle and the fact that you don’t get paid much as an underling for Democrats or as a moral lobbyist. The big bucks are found in the ethically grey areas. Throughout the documentary, she vacillates between the outrage of a private citizen and the rationalization of a marketer. She is a woman torn between two conflicting perspectives.

We marketers have to stop kidding ourselves and justifying misuse of personal data with statements like the one previously quoted from Kevin Tash. As people, we’re okay. I like most of the marketers I know. But as professional marketers, we have a pretty shitty track record. We trample privacy, we pry into places we shouldn’t and we gleefully high-five ourselves when we deliver the goods on a campaign — no matter who that campaign might be for and what its goals might be. We are very different people when we’re on the clock.

We are now faced with what may be the most important questions of our lives: How do we manage our personal data? Who owns it? Who stores it? Who has the right to use it? When we answer those questions, let’s do it as people, and not marketers. Because there is a lot more at stake here than the ROI rates on a marketing campaign.

Why Are So Many Companies So Horrible At Responding To Emails?

I love email. I hate 62.4% of the people I email.

Sorry. That’s not quite right. I hate 62.4% of the people I email in the futile expectation of a response…sometime…in the next decade or so (I will get back to the specificity of the 62.4% shortly).  It’s you who suck.

You know who you are. You are the ones who never respond to emails, who force me to send email after email with an escalating tone of prickliness, imploring you to take a few seconds from whatever herculean tasks fill your day to actually acknowledge my existence.

It’s you who force me to continually set aside whatever I’m working on to prod you into doing your damned job! And — often — it is you who cause me to eventually abandon email in exasperation and then sink further into the 7th circle of customer service hell: voicemail.

Why am I (and trust me, I’m not alone) so exasperated with you? Allow me to explain.

From our side, when we send an email, we are making a psychological statement about how we expect this communication channel to proceed. We have picked this channel deliberately. It is the right match for the mental prioritization we have given this task.

In 1891, in a speech on his 70th birthday, German scientist Hermann von Helmholtz explained how ideas came to him. He identified four stages that were later labeled by social psychologist Graham Wallas: Preparation, Incubation, Illumination and Verification. These stages have held up remarkably well against the findings of modern neuroscience. Each of these stages has a distinct cognitive pattern and its own set of communication expectations.

  1. Preparation
    Preparation is gathering the information required for our later decision-making. We are actively foraging, looking for gaps in our current understanding of the situation and tracking down sources of that missing information. Our brains are actively involved in the task, but we also have a realistic expectation of the timeline required. This is the perfect match for email as a channel. We’ll come back to our expectations at this stage in a moment, as it’s key to understanding what a reasonable response time is.
  2. Incubation
    Once we have the information we require, our brain often moves the problem to the back burner. Even though it’s not “top of mind,” this doesn’t mean the brain isn’t still mulling it over. It’s the processing that happens while we’re sleeping or taking a walk. Because the brain isn’t actively working on the problem, there is no real communication needed.
  3. Illumination
    This is the eureka moment. You literally “make up your mind”: the cognitive stars align and you settle on a decision. You are now ready to take action. Again, at this stage, there is little to no outside communication needed.
  4. Verification
    Even though we’ve “made up our mind,” there is still one more step before action. We need to make sure our decision matches what is feasible in the real world. Does our internal reality match the external one? Again, our brains are actively involved, pushing us forward. Again, there is often some type of communication required here.

What we have here — in intelligence terms — is a sensemaking loop. The brain ideally wants this loop to continue smoothly, without interruption. But at two of the stages — the beginning and end — our brain needs to idle, waiting for input from the outside world.

Brains that have put tasks on idle do one of two things: They forget, or they get irritated. There are no other options.

The only variance is the degree of irritation. If the task is not that important to us, we get mildly irritated. The more important the task and the longer we are forced to put it on hold, the more frustrated we get.

Next, let’s talk about expectations. At the Preparation phase, we realize the entire world does not march to the beat of our internal drummer. Using email is our way to accommodate the collective schedules of the world. We are not demanding an immediate response. If we did, we’d use another channel, like a phone or instant messaging. When we use email, we expect those on the receiving end to fit our requirements into their priorities.

A recent survey by Jeff Toister, a customer service consultant, found that 87% of respondents expect a response to their emails within one day. Half of those expect a response in four hours or less. The most demanding are baby boomers — probably because email is still our preferred communication channel.

What we do not expect is for our emails to be completely ignored. Forever.

Yet, according to a recent benchmark study by SuperOffice, that is exactly what happens: 62.4% of businesses contacted with a customer service question in the study never responded, and 90.5% never acknowledged receiving the email. They effectively said to those customers, “Either forget us or get pissed off at us. We don’t really care.”

This lack of response is fine if you really don’t care. I toss a number of emails from my inbox daily without responding. They are a waste of my time. But if you have any expectation of having any type of relationship with the sender, take the time to hit the “reply” button.

There were some red flags that these non-responsive companies had in common. Typically, they could only be contacted through a web form on their site. I know I only fill these out if I have no other choice. If there is a direct email link, I always opt for that. These companies also tended to be smaller and didn’t use auto-responders to confirm a message had been received.

If this sounds like a rant, it is. One of my biggest frustrations is lack of email follow-up. I have found that the bar to surprise and delight me via your email response procedure is incredibly low:

  1. Respond.
  2. Don’t be a complete idiot.

Less Tech = Fewer Regrets

In a tech-ubiquitous world, I fear our reality is becoming more “tech” and less “world.” But how do you fight that? Well, if you’re Kendall Marianacci – a recent college grad – you ditch your phone and move to Nepal. In the process, she learned that “paying attention to the life in front of you opens a new world.”

In a recent post, she reflected on lessons learned by truly getting off the grid:

“Not having any distractions of a phone and being immersed in this different world, I had to pay more attention to my surroundings. I took walks every day just to explore. I went out of my way to meet new people and ask them questions about their lives. When this became the norm, I realized I was living for one of the first times of my life. I was not in my own head distracted by where I was going and what I needed to do. I was just being. I was present and welcoming to the moment. I was compassionate and throwing myself into life with whoever was around me.”

It’s sad and a little shocking that we have to go to such extremes to realize how much of our world can be obscured by a little 5-inch screen. Where did tech that was supposed to make our lives better go off the rails? And was the derailment intentional?

“Absolutely,” says Jesse Weaver, a product designer. In a post on Medium.com, he lays out – in alarming terms – our tech dependency and the trade-off we’re agreeing to:

“The digital world, as we’ve designed it, is draining us. The products and services we use are like needy friends: desperate and demanding. Yet we can’t step away. We’re in a codependent relationship. Our products never seem to have enough, and we’re always willing to give a little more. They need our data, files, photos, posts, friends, cars, and houses. They need every second of our attention.

We’re willing to give these things to our digital products because the products themselves are so useful. Product designers are experts at delivering utility.”

But are they? Yes, there is utility here, but it’s wrapped in a thick layer of addiction. What product designers are really good at is fostering addiction by dangling a carrot of utility. And, as Weaver points out, we often mistake utility for empowerment:

“Empowerment means becoming more confident, especially in controlling our own lives and asserting our rights. That is not technology’s current paradigm. Quite often, our interactions with these useful products leave us feeling depressed, diminished, and frustrated.”

That’s not just Weaver’s opinion. A new study from HumaneTech.com backs it up with empirical evidence. They partnered with Moment, a screen time tracking app, “to ask how much screen time in apps left people feeling happy, and how much time left them in regret.”

According to 200,000 iPhone users, here are the apps that make people happiest:

  1. Calm
  2. Google Calendar
  3. Headspace
  4. Insight Timer
  5. The Weather
  6. MyFitnessPal
  7. Audible
  8. Waze
  9. Amazon Music
  10. Podcasts

That’s three meditative apps, three utilitarian apps, one fitness app, one entertainment app and two apps that help you broaden your intellectual horizons. If you are talking human empowerment – according to Weaver’s definition – you could do a lot worse than this roundup.

But here were the apps that left their users with a feeling of regret:

  1. Grindr
  2. Candy Crush Saga
  3. Facebook
  4. WeChat
  5. Candy Crush
  6. Reddit
  7. Tweetbot
  8. Weibo
  9. Tinder
  10. Subway Surf

What is even more interesting is the average time spent in these apps. For the first group, the average daily usage was 9 minutes. For the regret group, the average daily time spent was 57 minutes! We feel better about apps that do their job, add something to our lives and then let us get on with living that life. What we hate are time sucks that may offer a kernel of functionality wrapped in an interface that ensnares us like a digital spider web.

This study comes from the Center for Humane Technology, headed by ex-Googler Tristan Harris. The goal of the Center is to encourage designers and developers to create apps that move “away from technology that extracts attention and erodes society, towards technology that protects our minds and replenishes society.”

That all sounds great, but what does it really mean for you and me and everybody else who hasn’t moved to Nepal? It all depends on what revenue model is driving development of these apps and platforms. If it is anything that depends on advertising – in any form – don’t count on any nobly intentioned shifts in design direction anytime soon. More likely, it will mean some half-hearted placations like Apple’s new Screen Time warning that pops up on your phone every Sunday, giving you the illusion of control over your behaviour.

Why an illusion? Because things like Apple’s Screen Time are great for our prefrontal cortex, the intent-driven part of our rational brain that puts our best intentions forward. They’re not so good for our lizard brain, which subconsciously drives us to play Candy Crush and swipe our way through Tinder. And when it comes to addiction, the lizard brain has been on a winning streak for most of the history of mankind. I don’t like our odds.

The developers’ escape hatch is always the same – they’re giving us control. It’s our own choice, and freedom of choice is always a good thing. But there is an unstated deception here. It’s the same lie that Mark Zuckerberg told last Wednesday when he laid out the privacy-focused future of Facebook. He’s putting us in control. But he’s not. What he’s doing is making us feel better about spending more time on Facebook. And that’s exactly the problem. The less we worry about the time we spend on Facebook, the less we will think about it at all. The less we think about it, the more time we will spend. And the more time we spend, the more we will regret it afterwards.

If that doesn’t seem like an addictive cycle, I’m not sure what does.