The Day My Facebook Bubble Popped

I learned this past week just how ideologically homogeneous my Facebook bubble usually is. Politically, I lean left of center. Almost all the people in my bubble are the same.

Said bubble has been built from the people I have met in the past 40 years or so. Most of these people are in marketing, digital media or tech. I seldom see anything in my feed I don’t agree with — at least to some extent.

But before all that, I grew up in a small town in a very right-wing part of Alberta, Canada. Last summer, I went to my 40-year high-school reunion. Many of my fellow graduates stayed close to our hometown for those 40 years. Some are farmers. Many work in the oil and gas industry. Most of them would fall somewhere to the right of where I sit in my beliefs and political leanings.

At the reunion, we did what people do at such things — we reconnected. Which in today’s world meant we friended each other on Facebook. What I didn’t realize at the time is that I had started a sort of sociological experiment. I had poked a conservative pin into my liberal social media bubble.

Soon, I started to see posts that were definitely coming from outside my typical bubble. But most of them fell into the “we can agree to disagree” camp of political debate. My new Facebook friends and I might not see eye-to-eye on certain things, but hell — you are good people, I’m good people, we can all live together in this big ideological tent.

On May 1, 2020, things began to change. That was when Canadian Prime Minister Justin Trudeau announced that 1,500 models of “assault-style” weapons would be classified as prohibited, effective immediately. This came after Gabriel Wortman killed 22 people in Nova Scotia, making it Canada’s deadliest shooting spree. Now, suddenly posts I didn’t politically agree with were hitting a very sensitive raw nerve. Still, I kept my mouth shut. I believed arguing on Facebook was pointless.

Through everything that’s happened in the four months since (it seems like four decades), I have resisted commenting when I see posts I don’t agree with. I know how pointless it is. I realize that I am never going to change anyone’s mind through a comment on a Facebook post.

I understand this is just an expression of free speech, and we are all constitutionally entitled to exercise it. I stuck with the Facebook rule I imposed for myself — keep scrolling and bite your tongue. Don’t engage.

I broke that rule last week. One particular post did it. The post asked why, with a COVID-19 survival rate of almost 100%, we needed a vaccine at all. I knew better, but I couldn’t help it.

I engaged. It was limited engagement to begin with. I posted a quick comment suggesting that with 800,000 (and counting) already gone, saving hundreds of thousands of lives might be a pretty good reason. Right or left, I couldn’t fathom anyone arguing with that.

I was wrong. Oh my God, was I wrong. My little comment unleashed a social media shit storm. Anti-vaxxing screeds, mind-control plots via China, government conspiracies to artificially over-count the death toll and attacks on the sheer stupidity of people wearing face masks proliferated in the comment thread for the next five days. I watched the thread grow in stunned disbelief. I had never seen this side of Facebook before.

Or had I? Perhaps the left-leaning posts I am used to are just as conspiratorial, but I don’t realize it because I happen to agree with them. I hope not, but perspective does strange things to our grasp of the things we believe to be true. Are we all — right or left — just exercising our right to free speech through a new platform? And — if we are — who am I to object?

Free speech is held up by Mark Zuckerberg and others as hallowed ground in the social-media universe. In a speech last fall at Georgetown University, Zuckerberg said: “The ability to speak freely has been central in the fight for democracy worldwide.”

It’s hard to argue with that. The ability to publicly disagree with the government or any other holder of power over you is much better than any alternative. And the drafters of the U.S. Bill of Rights agreed: freedom of speech was enshrined in the First Amendment. But the authors of that amendment — perhaps presciently — never defined exactly what constituted free speech. Maybe they knew it would be a moving target.

Over the history of the First Amendment, it has been left to the courts to decide what the exceptions would be.

In general, the courts have tightened the definitions around one area — what types of expression constitute a “clear and present danger” to others. Currently, unless you’re specifically inciting someone to break the law in the very near future, you’re protected under the First Amendment.

But is there a bigger picture here — one very specific to social media? Yes, legally in the U.S. (or Canada), you can post almost anything on Facebook.

Certainly, taking a stand against face masks and vaccines would qualify as free speech. But it’s not only the law that keeps society functioning. Most of the credit for that falls to social norms.

Social norms are the unwritten laws that govern much of our behavior. They are the “soft guard rails” of society that nudge us back on track when we veer off-course. They rely on us conforming to behaviors accepted by the majority.

If you agree with social norms, there is little nudging required. But if you happen to disagree with them, your willingness to follow them depends on how many others also disagree with them.

Famed sociologist Mark Granovetter showed in his Threshold Models of Collective Behavior that there can be tipping points in groups. If there are enough people who disagree with a social norm, it will create a cascade that can lead to a revolt against the norm.
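To make that tipping point concrete, here is a minimal sketch of a Granovetter-style threshold cascade. It is my own illustration, not code from Granovetter, and the threshold values are hypothetical:

```python
# Minimal sketch of a Granovetter-style threshold model (illustrative only).
# Each person defies a social norm once the share of others already defying
# it meets their personal threshold. Threshold values below are hypothetical.

def cascade(thresholds):
    """Return the fraction of the group that ends up defying the norm."""
    n = len(thresholds)
    defectors = 0
    changed = True
    while changed:
        share = defectors / n
        new_defectors = sum(1 for t in thresholds if t <= share)
        changed = new_defectors != defectors
        defectors = new_defectors
    return defectors / n

# A geographically mixed group: thresholds are spread out, so the cascade stalls.
print(cascade([0.0, 0.2, 0.3, 0.5, 0.6, 0.7, 0.8, 0.9, 0.9, 1.0]))  # -> 0.1

# A filter bubble: everyone sees agreement, thresholds cluster low, full revolt.
print(cascade([0.0, 0.1, 0.1, 0.2, 0.2, 0.3, 0.3, 0.4, 0.4, 0.5]))  # -> 1.0
```

Note that the second group is no more radical than the first; its thresholds are simply clustered low enough that each defection licenses the next. That clustering is exactly what a feed full of like-minded posts produces.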

Prior to social media, the thresholds for this type of behavior were quite high. Even if some of us were quick to act anti-socially, we were generally acting alone.

Most of us felt we needed a substantial number of like-minded people before we were willing to upend a social norm. And when our groups were determined geographically and composed of ideologically diverse members, this was generally sufficient to keep things on track.

But your social-media feed dramatically lowers this threshold.

Suddenly, all you see are supporting posts of like-minded people. It seems that everyone agrees with you. Emboldened, you are more likely to go against social norms.

The problem here is that social norms are generally there because they are in the best interests of the majority of the people in society. If you go against them by refusing a vaccine or refusing to wear a face mask, thereby allowing a disease to spread, you endanger others. Perhaps it doesn’t meet the legal definition of “imminent lawless action,” but it does present a “clear and present danger.”

That’s a long explanation of why I broke my rule about arguing on Facebook.

Did I change anyone’s mind? No. But I did notice that the person who made the original post has changed their settings, so I don’t see the political ones anymore. I just see posts about grandkids and puppies.

Maybe it’s better that way.

The Fundamentals of an Evil Marketplace

Last week, I talked about the nature of tech companies and why this leads to them being evil. But as I said, there was an elephant in the room I didn’t touch on — and that’s the nature of the market itself. The platform-based market also has inherent characteristics that lead toward being evil.

The problem is that corporate ethics are usually based on the philosophies of Milton Friedman, an economist whose heyday was in the 1970s. Corporations are playing by a rule book that is tragically out of date.

Beware the Invisible Hand

Friedman said, “The great virtue of a free market system is that it does not care what color people are; it does not care what their religion is; it only cares whether they can produce something you want to buy. It is the most effective system we have discovered to enable people who hate one another to deal with one another and help one another.”

This is a porting over of Adam Smith’s “Invisible Hand” theory from economics to ethics: the idea that an open and free marketplace is self-regulating and, in the end, the model that is the most virtuous to the greatest number of people will take hold.

That was a philosophy born in another time, referring to a decidedly different market. Friedman’s “virtue” depends on a few traditional market conditions, idealized in the concept of a perfect market: “a market where the sellers of a product or service are free to compete fairly, and sellers and buyers have complete information.”

Inherent in Friedman’s definition of market ethics is the idea of a deliberate transaction, a value trade driven by rational thought. This is where the concept of “complete information” comes in. This information is what’s required for a rational evaluation of the value trade. When we talk about the erosion of ethics we see in tech, we quickly see that the prerequisite of a deliberate and rational transaction is missing — and with it, the conditions needed for an ethical “invisible hand.”

The other assumption in Friedman’s definition is a marketplace that encourages open and healthy competition. This gives buyers the latitude to make the choice that best aligns with their requirements.

But when we’re talking about markets that tend to trend towards evil behaviors, we have to understand that there’s a slippery slope that ends in a place far different from the one Friedman idealized.

Advertising as a Revenue Model

For developers of user-dependent networks like Google and Facebook, using advertising sales for revenue was the path of least resistance for adoption — and, once adopted by users, to profitability. It was a model co-opted from other forms of media, so everybody was familiar with it. But, in the adoption of that model, the industry took several steps away from the idea of a perfect market.

First of all, you have significantly lowered the bar required for that rational value exchange calculation. For users, there is no apparent monetary cost. Our value judgement mechanisms idle down because it doesn’t appear as if the protection they provide is needed.

In fact, the opposite happens. The reward center of our brain perceives a bargain and starts pumping the accelerator. We rush past the accept buttons to sign up, thrilled at the new capabilities and convenience we receive for free. That’s the first problem.

The second is that the minute you introduce advertising, you lose the transparency that’s part of the perfect market. There is a thick layer of obfuscation that sits between “users” and “producers.” The smoke screen is required because of the simple reality that the best interests of the user are almost never aligned with the best interests of the advertiser.

In this new marketplace, advertising is a zero-sum game. For the advertiser to win, the user has to lose. The developers of these platforms hide this simple arithmetic behind a veil of secrecy and baffling language.

Products That Are a Little Too Personal

The new marketplace is different in another important way: The products it deals in are unlike any products we’ve ever seen before.

The average person spends about a third of his or her time online, mostly interacting with a small handful of apps and platforms. Facebook alone accounts for almost 20% of all our waking time.

This reliance on these products reinforces our belief that we’re getting the bargain of a lifetime: All the benefits the platform provides are absolutely free to us! Of course, in the time we spend online, we are feeding these tools a constant stream of intimately personal information about ourselves.

What is lurking behind this benign facade is a troubling progression of addictiveness. Because revenue depends on advertising sales, two factors become essential to success: the attention of users, and information about them.

An offer of convenience or usefulness “for free” is the initial hook, but then it becomes essential to entice users to spend more time with the platform and also to volunteer more information about themselves. The most effective way to do this is to make them more and more dependent on the platform.

Now, you could build conscious dependency by giving users good, rational reasons to keep coming back. Or, you could build dependence subconsciously, by creating addicts. The first option is good business that follows Friedman’s philosophy. The second option is just evil. Many tech platforms — Facebook included — have chosen to go down both paths.

The New Monopolies

The final piece of Friedman’s idealized marketplace that’s missing is the concept of healthy competition. In a perfect marketplace, the buyer’s cost of switching is minimal. You have a plethora of options to choose from, and you’re free to pursue the one best for you.

This is definitely not the case in the marketplace of online platforms and tools like Google and Facebook. Because they are dependent on advertising revenues, their survival is linked to audience retention. To this end, they have constructed virtual monopolies by ruthlessly eliminating or buying up any potential competitors.

Further, under the guise of convenience, they have imposed significant costs on those that do choose to leave. The net effect of this is that users are faced with a binary decision: Opt into the functionality and convenience offered, or opt out. There are no other choices.

Whom Do You Serve?

Friedman also said, in a 1970 paper, that the only social responsibility of a business is to increase its profits. But this raises the further question: What must be done — and for whom — to increase profits? If it’s creating a better product so users buy more, then there is an ethical trickle-down effect that should benefit all.

But this isn’t the case if profitability is dependent on selling more advertising. Now we have to deal with an inherent ethical conflict. On one side, you have the shareholders and advertisers. On the other, you have users. As I said, for one to win, the other must lose. If we’re looking for the root of all evil, we’ll probably find it here.

This Election, Canucks Were “Zucked”

Note: I originally wrote this before results were available. Today, we know Trudeau’s Liberals won a minority government, but the Conservatives actually won the popular vote: 34.4% vs 33.06% for the Liberals. It was a very close election.

As I write this, Canadians are going to the polls in our national election. When you read this, the outcome will have been decided. I won’t predict — because this one is going to be too close to call.

For a nation that is often satirized for our tendencies to be nice and polite, this has been a very nasty campaign. So nasty, in fact, that in focusing on scandals and personal attacks, it forgot to mention the issues.

Most of us are going to the polls today without an inkling of who stands for what. We’re basically voting for the candidate we hate the least. In other words, we’re using the same decision strategy we used to pick the last guest at our grade 6 birthday party.

The devolution of democracy has now hit the Great White North, thanks to Facebook and Mark Zuckerberg.

While the amount of viral vitriol I have seen here is still a pale shadow of what I saw from south of the 49th in 2016, it’s still jarring to witness. Canucks have been “Zucked.” We’re so busy slinging mud that we’ve forgotten to care about the things that are essential to our well being as a nation.

It should come as news to no one that Facebook has been wantonly trampling the tenets of democracy. Elizabeth Warren recently ran a fake ad on Facebook just to show she could. Then Mark Zuckerberg defended Facebook last week when he said: “While I certainly worry about an erosion of truth, I worry about living in a world where you can only post things that tech companies decide to be 100 per cent true.”

Zuckerberg believes the onus lies with the Facebook user to be able to judge what is false and what is not. This is a suspiciously convenient defense of Facebook’s revenue model wrapped up as a defense of freedom of speech. At best, it’s naïve and hypocritical: what we see is determined by Facebook’s algorithm. At worst, it’s misleading and malicious.

Hitting hot buttons tied to emotions is nothing new in politics. Campaign runners have been drawing out and sharpening the long knives for decades now. TV ads added a particularly effective weapon to the political arsenal. In the 1964 presidential campaign, it even went nuclear with Lyndon Johnson’s famous “Daisy” ad.

But this is different. For many reasons.

First of all, there is the question of trust in the channel. We have been raised in a world where media channels historically take some responsibility to delineate between what they say is factual (i.e., the news) and what is paid persuasion (i.e., the ads).

In his statement, Zuckerberg is essentially telling us that providing some baseline of trust in political advertising is not Facebook’s job and not its problem. We should know better.

But we don’t. It’s a remarkably condescending and convenient excuse for Zuckerberg to appear to be telling us “You should be smarter than this” when he knows that this messaging has little to do with our intellectual horsepower.

This is messaging that is painstakingly designed to be mentally processed before the rational part of our brain even kicks in.

In a recent survey, three out of four Canadians said they had trouble telling which social media accounts were fake. And 40% of Canadians said they had found links to stories on current affairs that were obviously false. Those were only the links they knew were fake. I assume that many more snuck through their factual filters. By the way, people of my generation are the worst at sniffing out fake news.

We’ve all seen fake news, but only one-third of Canadians 55 and over realize they’ve seen it. We can’t all be stupid.

Because social media runs on open platforms, with very few checks and balances, it’s wide open for abuse. Fake accounts, bots, hacks and other digital detritus litter the online landscape. There has been little effective policing of this. The issue is that cracking down on this directly impacts the bottom line. As Upton Sinclair said: “It is difficult to get a man to understand something, when his salary depends on his not understanding it.”

Even given these two gaping vulnerabilities, the biggest shift when we think of social media as an ad platform is that it is built on the complexity of a network. The things that come with this — things like virality, filter bubbles, threshold effects — have no corresponding rule book to play by. It’s like playing poker with a deck full of wild cards.

Now — let’s talk about targeting.

When you take all of the above and then factor in the data-driven targeting that is now possible, you light the fuse on the bomb nestled beneath our democratic platforms. You can now segment out the most vulnerable, gullible, volatile sectors of the electorate. You can feed them misinformation and prod them to action. You can then sit back and watch as the network effects play themselves out. Fan — meet shit. Shit — meet fan.

It is this that Facebook has wrought, and then Mark Zuckerberg feeds us some holier-than-thou line about freedom of speech.

Mark, I worry about living in a world where false — and malicious — information can be widely disseminated because a tech company makes a profit from it.

Why Elizabeth Warren Wants to Break Up Big Tech

Earlier this year, Democratic presidential candidate Elizabeth Warren posted an online missive in which she laid out her plans to break up big tech (notably Amazon, Google and Facebook). In it, she noted:

“Today’s big tech companies have too much power — too much power over our economy, our society, and our democracy. They’ve bulldozed competition, used our private information for profit, and tilted the playing field against everyone else. And in the process, they have hurt small businesses and stifled innovation.”

We, here in the West, are big believers in Adam Smith’s Invisible Hand. We inherently believe that markets will self-regulate and eventually balance themselves. We are loath to involve government in the running of a free market.

In introducing the concept of the Invisible Hand, Smith speculated:

“[The rich] consume little more than the poor, and in spite of their natural selfishness and rapacity…they divide with the poor the produce of all their improvements. They are led by an invisible hand to make nearly the same distribution of the necessaries of life, which would have been made, had the earth been divided into equal portions among all its inhabitants, and thus without intending it, without knowing it, advance the interest of the society, and afford means to the multiplication of the species.”

In short, a rising tide lifts all boats. But there is a dicey little dilemma buried in the midst of the Invisible Hand premise – summed up most succinctly by the fictional Gordon Gekko in the 1987 movie Wall Street: “Greed is good.”

More eloquently, economist and Nobel laureate Milton Friedman explained it like this:

“The great virtue of a free market system is that it does not care what color people are; it does not care what their religion is; it only cares whether they can produce something you want to buy. It is the most effective system we have discovered to enable people who hate one another to deal with one another and help one another.” 

But here’s the thing. Up until very recently, the concept of the Invisible Hand dealt only with physical goods. It was all about maximizing tangible resources and distributing them to the greatest number of people in the most efficient way possible.

The difference now is that we’re not just talking about toasters or running shoes. Physical things are not the stock in trade of Facebook or Google. They deal in information, feelings, emotions, beliefs and desires. We are not talking about hardware any longer; we are talking about the very operating system of our society. The thing that guides the Invisible Hand is no longer consumption; it’s influence. And, in that case, we have to wonder whether we’re willing to trust our future to the conscience of a corporation.

For this reason, I suspect Warren might be right. The past arguments for keeping government out of business were all based on a physical market. When we shift that to a market that peddles influence, those arguments are flipped on their head. Milton Friedman himself said, “It (the corporation) only cares whether they can produce something you want to buy.” Let’s shift that to today’s world and apply it to a corporation like Facebook: “It only cares whether they can produce something that captures your attention.” To expect anything else from a corporation that peddles persuasion is to expect too much.

The problem with Warren’s argument is that she is still using the language of a market that dealt with consumable products. She wants to break up a monopoly that is limiting competition. And she is targeting that message to an audience that generally believes that big government and free markets don’t mix.

The much, much bigger issue here is that even if you believe in the efficacy of the Invisible Hand, as described by all believers from Smith to Friedman, you also have to believe that the single purpose of a corporation that relies on selling persuasion will be to influence even more people more effectively. None of the most fervent evangelists of the Invisible Hand ever argued that corporations have a conscience. They simply stated that the interests of a profit-driven company and an audience intent on consumption were typically aligned.

We’re now playing a different game with significantly different rules.

This is Why We Can’t Have Nice Things

Relevance is the new gold standard in marketing. In an article in the Harvard Business Review written last year, John Zealley, Robert Wollan and Joshua Bellin — three senior execs at Accenture — outline five stages of marketing (paraphrased courtesy of a post from Phillip Nones):

  1. Mass marketing (up through the 1970s) – The era of mass production, scale and distribution.
  2. Marketing segmentation (1980s) – More sophisticated research enabling marketers to target customers in niche segments.
  3. Customer-level marketing (1990s and 2000s) – Advances in enterprise IT make it possible to target individuals and aim to maximize customer lifetime value.
  4. Loyalty marketing (2010s) – The era of CRM, tailored incentives and advanced customer retention.
  5. Relevance marketing (emerging) – Mass communication to the previously unattainable “Segment of One.”

This last stage – according to marketers past and present – should be the golden era of marketing:

“The perfect advertisement is one of which the reader can say, ‘This is for me, and me alone.’”

— Peter Drucker

“Audiences crave tailored messages that cater to them specifically and they are willing to offer information that enables marketers to do so.”

— Kevin Tash, CEO of Tack Media, a digital marketing agency in Los Angeles.

Umm…no! In fact, hell, no!

I agree that relevance is an important thing. And in an ethical world, the exchange Tash talks about would be a good thing, for both consumers and marketers. But we don’t live in such a world. The world we live in has companies like Facebook and Cambridge Analytica.

Stop Thinking Like a Marketer!

There is a cognitive whiplash that happens when our perspective changes from that of a marketer to that of a consumer. I’ve seen it many times. I’ve even prompted it on occasion. But to watch it in 113 minutes of excruciating detail, you should catch “The Great Hack” on Netflix.

The documentary is a journalistic peeling of the onion that is the Cambridge Analytica scandal. It was kicked off by the whistle-blowing of Christopher Wylie, a contract programmer who enjoyed his 15 minutes of fame. But to me, the far more interesting story is that of Brittany Kaiser, the director of business development at SCL Group, the parent company of Cambridge Analytica. The documentary digs into her tortured shift of perspective as she transitions from thinking like a marketer to thinking like a citizen who has just had her private data violated. It makes for compelling viewing.

Kaiser shifted her ideological compass about as far as one possibly could, from her beginnings as an idealistic intern for Barack Obama and a lobbyist for Amnesty International to one of the chief architects of the campaigns supporting Trump’s presidential run, Brexit and other far-right persuasion blitzkriegs. At one point, she justifies her shift to the right by revealing her family’s financial struggle and the fact that you don’t get paid much as an underling for Democrats or as a moral lobbyist. The big bucks are found in the ethically grey areas. Throughout the documentary, she vacillates between the outrage of a private citizen and the rationalization of a marketer. She is a woman torn between two conflicting perspectives.

We marketers have to stop kidding ourselves and justifying misuse of personal data with statements like the one previously quoted from Kevin Tash. As people, we’re okay. I like most of the marketers I know. But as professional marketers, we have a pretty shitty track record. We trample privacy, we pry into places we shouldn’t and we gleefully high-five ourselves when we deliver the goods on a campaign — no matter who that campaign might be for and what its goals might be. We are very different people when we’re on the clock.

We are now faced with what may be the most important questions of our lives: How do we manage our personal data? Who owns it? Who stores it? Who has the right to use it? When we answer those questions, let’s do it as people, and not marketers. Because there is a lot more at stake here than the ROI rates on a marketing campaign.

Why Are So Many Companies So Horrible At Responding To Emails?

I love email. I hate 62.4% of the people I email.

Sorry. That’s not quite right. I hate 62.4% of the people I email in the futile expectation of a response…sometime…in the next decade or so (I will get back to the specificity of the 62.4% shortly). It’s you who suck.

You know who you are. You are the ones who never respond to emails, who force me to send email after email with an escalating tone of prickliness, imploring you to take a few seconds from whatever herculean tasks fill your day to actually acknowledge my existence.

It’s you who force me to continually set aside whatever I’m working on to prod you into doing your damned job! And — often — it is you who cause me to eventually abandon email in exasperation and sink further into the 7th circle of customer service hell: voicemail.

Why am I (and trust me, I’m not alone) so exasperated with you? Allow me to explain.

From our side, when we send an email, we are making a psychological statement about how we expect this communication channel to proceed. We have picked this channel deliberately. It is the right match for the mental prioritization we have given this task.

In 1891, in a speech on his 70th birthday, German scientist Hermann von Helmholtz explained how ideas came to him. He identified four stages that were later labeled by social psychologist Graham Wallas: Preparation, Incubation, Illumination and Verification. These stages have held up remarkably well against the findings of modern neuroscience. Each of these stages has a distinct cognitive pattern and its own set of communication expectations.

  1. Preparation
    Preparation is gathering the information required for our later decision-making. We are actively foraging, looking for gaps in our current understanding of the situation and tracking down sources of that missing information. Our brains are actively involved in the task, but we also have a realistic expectation of the timeline required. This is the perfect match for email as a channel. We’ll come back to our expectations at this stage in a moment, as it’s key to understanding what a reasonable response time is.
  2. Incubation
    Once we have the information we require, our brain often moves the problem to the back burner. Even though it’s not “top of mind,” this doesn’t mean the brain isn’t still mulling it over. It’s the processing that happens while we’re sleeping or taking a walk. Because the brain isn’t actively working on the problem, there is no real communication needed.
  3. Illumination
    This is the eureka moment. You literally “make up your mind”: the cognitive stars align and you settle on a decision. You are now ready to take action. Again, at this stage, there is little to no outside communication needed.
  4. Verification
    Even though we’ve “made up our mind,” there is still one more step before action. We need to make sure our decision matches what is feasible in the real world. Does our internal reality match the external one? Our brains are again actively involved, pushing us forward, and there is often some type of communication required here.

What we have here — in intelligence terms — is a sensemaking loop. The brain ideally wants this loop to continue smoothly, without interruption. But at two of the stages — the beginning and end — our brain needs to idle, waiting for input from the outside world.

Brains that have put tasks on idle do one of two things: They forget, or they get irritated. There are no other options.

The only variance is the degree of irritation. If the task is not that important to us, we get mildly irritated. The more important the task and the longer we are forced to put it on hold, the more frustrated we get.

Next, let’s talk about expectations. At the Preparation phase, we realize the entire world does not march to the beat of our internal drummer. Using email is our way to accommodate the collective schedules of the world. We are not demanding an immediate response. If we did, we’d use another channel, like a phone or instant messaging. When we use email, we expect those on the receiving end to fit our requirements into their priorities.

A recent survey by Jeff Toister, a customer service consultant, found that 87% of respondents expect a response to their emails within one day. Half of those expect a response in four hours or less. The most demanding are baby boomers — probably because email is still our preferred communication channel.

What we do not expect is for our emails to be completely ignored. Forever.

Yet, according to a recent benchmark study by SuperOffice, that is exactly what happens. Of the businesses contacted in the study with a customer service question, 62.4% never responded, and 90.5% never even acknowledged receiving an email. They effectively said to those customers, “Either forget us or get pissed off at us. We don’t really care.”

This lack of response is fine if you really don’t care. I toss a number of emails from my inbox daily without responding. They are a waste of my time. But if you have any expectation of having any type of relationship with the sender, take the time to hit the “reply” button.

There were some red flags that these non-responsive companies had in common. Typically, they could only be contacted through a web form on their site. I know I only fill these out if I have no other choice. If there is a direct email link, I always opt for that. These companies also tended to be smaller and didn’t use auto-responders to confirm a message had been received.

If this sounds like a rant, it is. One of my biggest frustrations is lack of email follow-up. I have found that the bar to surprise and delight me via your email response procedure is incredibly low:

  1. Respond.
  2. Don’t be a complete idiot.

Less Tech = Fewer Regrets

In a tech-ubiquitous world, I fear our reality is becoming more “tech” and less “world.” But how do you fight that? Well, if you’re Kendall Marianacci – a recent college grad – you ditch your phone and move to Nepal. In the process, she learned that “paying attention to the life in front of you opens a new world.”

In a recent post, she reflected on lessons learned by truly getting off the grid:

“Not having any distractions of a phone and being immersed in this different world, I had to pay more attention to my surroundings. I took walks every day just to explore. I went out of my way to meet new people and ask them questions about their lives. When this became the norm, I realized I was living for one of the first times of my life. I was not in my own head distracted by where I was going and what I needed to do. I was just being. I was present and welcoming to the moment. I was compassionate and throwing myself into life with whoever was around me.”

It’s sad and a little shocking that we have to go to such extremes to realize how much of our world can be obscured by a little 5-inch screen. Where did tech that was supposed to make our lives better go off the rails? And was the derailment intentional?

“Absolutely,” says Jesse Weaver, a product designer. In a post on Medium.com, he lays out – in alarming terms – our tech dependency and the trade-off we’re agreeing to:

“The digital world, as we’ve designed it, is draining us. The products and services we use are like needy friends: desperate and demanding. Yet we can’t step away. We’re in a codependent relationship. Our products never seem to have enough, and we’re always willing to give a little more. They need our data, files, photos, posts, friends, cars, and houses. They need every second of our attention.

We’re willing to give these things to our digital products because the products themselves are so useful. Product designers are experts at delivering utility.”

But are they? Yes, there is utility here, but it’s wrapped in a thick layer of addiction. What product designers are really good at is fostering addiction by dangling a carrot of utility. And, as Weaver points out, we often mistake utility for empowerment:

“Empowerment means becoming more confident, especially in controlling our own lives and asserting our rights. That is not technology’s current paradigm. Quite often, our interactions with these useful products leave us feeling depressed, diminished, and frustrated.”

That’s not just Weaver’s opinion. A new study from HumaneTech.com backs it up with empirical evidence. They partnered with Moment, a screen time tracking app, “to ask how much screen time in apps left people feeling happy, and how much time left them in regret.”

According to 200,000 iPhone users, here are the apps that make people happiest:

  1. Calm
  2. Google Calendar
  3. Headspace
  4. Insight Timer
  5. The Weather
  6. MyFitnessPal
  7. Audible
  8. Waze
  9. Amazon Music
  10. Podcasts

That’s three meditative apps, three utilitarian apps, one fitness app, one entertainment app and two apps that help you broaden your intellectual horizons. If you are talking human empowerment – according to Weaver’s definition – you could do a lot worse than this roundup.

But here were the apps that left their users with a feeling of regret:

  1. Grindr
  2. Candy Crush Saga
  3. Facebook
  4. WeChat
  5. Candy Crush
  6. Reddit
  7. Tweetbot
  8. Weibo
  9. Tinder
  10. Subway Surf

What is even more interesting is the average time spent in these apps. For the first group, average daily usage was 9 minutes. For the regret group, it was 57 minutes! We feel better about apps that do their job, add something to our lives and then let us get on with living that life. What we hate are time sucks that may offer a kernel of functionality wrapped in an interface that ensnares us like a digital spider web.

This study comes from the Center for Humane Technology, headed by ex-Googler Tristan Harris. The goal of the Center is to encourage designers and developers to create apps that move “away from technology that extracts attention and erodes society, towards technology that protects our minds and replenishes society.”

That all sounds great, but what does it really mean for you and me and everybody else that hasn’t moved to Nepal? It all depends on what revenue model is driving development of these apps and platforms. If it is anything that depends on advertising – in any form – don’t count on any nobly intentioned shifts in design direction anytime soon. More likely, it will mean some half-hearted placations like Apple’s new Screen Time warning that pops up on your phone every Sunday, giving you the illusion of control over your behaviour.

Why an illusion? Because things like Apple’s Screen Time are great for our prefrontal cortex, the intent-driven part of our rational brain that puts our best intentions forward. They’re not so good for our Lizard brain, which subconsciously drives us to play Candy Crush and swipe our way through Tinder. And when it comes to addiction, the Lizard brain has been on a winning streak for most of the history of mankind. I don’t like our odds.

The developers’ escape hatch is always the same: they’re giving us control. It’s our own choice, and freedom of choice is always a good thing. But there is an unstated deception here. It’s the same lie that Mark Zuckerberg told last Wednesday when he laid out the privacy-focused future of Facebook. He’s putting us in control. But he’s not. What he’s doing is making us feel better about spending more time on Facebook. And that’s exactly the problem. The less we worry about the time we spend on Facebook, the less we will think about it at all. The less we think about it, the more time we will spend. And the more time we spend, the more we will regret it afterwards.

If that doesn’t seem like an addictive cycle, I’m not sure what does.


Influencer Marketing’s Downward Ethical Spiral

One of the impacts of our increasing rejection of advertising is that advertisers are becoming sneakier at presenting advertising that doesn’t look like advertising. One example is native advertising. Another is influencer marketing. I’m not a big fan of either. I find native advertising mildly irritating. But I have bigger issues with influencer marketing.

Case in point: Taytum and Oakley Fisher. They’re identical twins, two years old, with 2.4 million followers on Instagram. They are adorable. They’re also expensive. A single branded photo on their feed goes for sums in the five-figure range. Of course, “they” are only two and have no idea what’s going on. This is all being stage-managed behind the scenes by their parents, Madison and Kyler.

The Fishers are not an isolated example. According to an article in Fast Company, adorable kids – especially twins – are a hot segment in an influencer market predicted to be worth $5 billion to $10 billion. Influencer management companies like God and Beauty are popping up. In a multi-billion-dollar market, there are a lot of opportunities for everyone to make a quick buck. And the bucks get bigger when the “stars” can actually remember their lines. Here’s a quote from the Fast Company article:

“The Fishers say they still don’t get many brand deals yet, because the girls can’t really follow directions. Once they’re old enough to repeat what their parents (and the brands paying them) want, they could be making even more.”

Am I the only one who finds this carrying a whiff of moral repugnance?

If so, you might say, “What’s the harm?” The audience is obviously there. It works. Taytum and Oakley appear to be having fun, judging by their identical grins. It’s just Gord being in a pissy mood again.

Perhaps. But I think there’s more going on here than we see on the typical Instagram feed.

One problem is transparency – or the lack of it. Whether you agree with traditional advertising or not, at least it happens in a well-defined and well-lit marketplace. There is transparency into the fundamental exchange: consumer attention for dollars. It is an efficient and time-tested market. There are metrics in place to measure the effectiveness of this exchange.

But when advertising attempts to present itself as something other than advertising, it slips from a black and white transaction to something lurking in the darkness colored in shades of grey. The whole point of influencer marketing is to make it appear that these people are genuine fans of these products, so much so that they can’t help evangelizing them through their social media feeds. This – of course – is bullshit. Money is paid for each one of these “genuine” tweets or posts. Big money. In some cases, hundreds of thousands of dollars. But that all happens out of sight and out of mind. It’s hidden, and that makes it an easy target for abuse.

But there is more than just a transactional transparency problem here. There is also a moral one. By becoming an influencer, you are actually becoming the influenced – allowing a brand to influence who you are, how you act, what you say and what you believe in. The influencer goes in believing that they are in control and the brand is just coming along for the ride. This is – again – bullshit. The minute you go on the payroll, you begin auctioning off your soul to the highest bidder. Amena Khan and Munroe Bergdorf both discovered this. The two influencers were cut from L’Oreal’s influencer roster for actually tweeting what they believed in.

The façade of influencer marketing is the biggest problem I have with it. It claims to be authentic and it’s about as authentic as pro wrestling – or Mickey Rourke’s face. Influencer marketing depends on creating an impossibly shiny bubble of your life filled with adorable families, exciting getaways, expensive shoes and the perfect soymilk latte. No real life can be lived under this kind of pressure. Influencer marketing claims to be inspirational, but it’s actually aspirational at the basest level. It relies on millions of us lusting after a life that is not real – a life where “all the women are strong, all the men are good-looking, and all the children are above average.”

Or – at least – all the children are named Taytum or Oakley.


Dear Facebook. It’s Not Me, It’s You

So, let’s say, hypothetically, one wanted to break up with Facebook. Just how would one do that?

I heard one person say that swearing off Facebook was a “position of privilege.” It was an odd way of putting it, until I thought about it a bit. This person was right. Much as I’d like to follow in retired tech journalist Walter Mossberg’s footsteps and quit Facebook cold turkey, I don’t think I can. I am not in that position. I am not so privileged.

This in no way condones Facebook and its actions. I’m still pretty pissed off about those. I suspect I might well be in an abusive relationship. I have this suspicion because I looked it up on Mentalhealth.net, a website offered by the American Addiction Centers. According to them, an abusive relationship is

“where one thing mistreats or misuses another thing. The important words in this definition are “mistreat” and “misuse”; they imply that there is a standard that describes how things should be treated and used, and that an abuser has violated that standard.

For the most part, only human beings are capable of being abusive, because only human beings are capable of understanding how things should be treated in the first place and then violating that standard anyway.”

That sounds bang on when I think about how Facebook has treated its users and their personal data. And everyone will tell you that if you’re in an unhealthy relationship, you should get out. But it’s not that easy. And that’s because of Metcalfe’s Law. Originally applied to telecommunication networks, it also applies to digitally mediated social networks. Metcalfe’s Law states that the value of a telecommunications network is proportional to the square of the number of connected users of the system.

The example often used is a telephone. If you’re the only person with one, it’s useless. If everyone has one, it’s invaluable. Facebook has about 2.3 billion users worldwide. That’s one out of every three people on this planet. Do the math. That’s a ton of value. It makes Facebook what they call very “sticky” in Silicon Valley.
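To put rough numbers on that, here is a back-of-envelope sketch. It uses the n(n-1)/2 count of possible pairwise links, the usual way Metcalfe’s n² proportionality is illustrated; the figures are mine, not from any Facebook disclosure:

```python
# Back-of-envelope sketch of Metcalfe's Law: the number of possible
# pairwise connections grows roughly with the square of the user count.

def potential_connections(n):
    """Unique user-to-user links in a network of n users: n(n-1)/2."""
    return n * (n - 1) // 2

for users in [1, 2, 10, 1000, 2_300_000_000]:  # last one: Facebook's ~2.3 billion
    print(f"{users:>13,} users -> {potential_connections(users):,} possible connections")
```

One user yields zero links (the lonely telephone); 2.3 billion users yield roughly 2.6 quintillion possible links. That, in two lines of arithmetic, is the stickiness.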

But it’s not just the number of users that makes Facebook valuable. It’s also the way they use it. Facebook has always intended to become the de facto platform for broad-based social connection. As such, it is built of “weak ties” – those social bonds defined by Mark Granovetter almost 50 years ago which connect scattered nodes in a network. To go back to the aforementioned “position of privilege” comment, the privilege in this case is a lack of dependence on weak ties.


My kids could probably quit Facebook. At least, it would be easier for them than it would be for me. But they also are not at the stage of their lives where weak ties are all that important. They use other platforms, like Snapchat, to communicate with their friends. It’s a channel built for strong ties. If they do need to bridge weak ties, they escalate their social postings, first to Instagram, then – finally – to their last resort: Facebook. It’s only through Facebook that they’ll reach parents, aunts, cousins and grandmas all at once.

It’s different for me. I have a lifetime of accumulated weak ties that I need to connect with all the time. And Facebook is the best way to do it. I connect with various groups, relatives, acquaintances and colleagues on an as-needed basis. I also need a Facebook presence for my business, because others who need to connect with me expect it. I don’t have the privilege of severing those ties.

So, I’ve decided that I can’t quit Facebook. At least, not yet. But I can use Facebook differently – more impersonally. I can use it as a connection platform rather than a channel for personal expression. I can make sure as little of my personal data falls into Facebook’s hands as possible. I don’t need to post what I like, how I’m feeling, what my beliefs are or what I do daily. I can close myself off to Facebook, turning this into a passionless relationship. From now on, I’ll consider it a tool – not a friend, not a confidante, not something I can trust – just a way to connect when I need to. My personal life is none of Facebook’s business – literally.

For me, it’s the first step in preventing more abuse.

Who Should (or Could) Protect Our Data?

Last week, when I talked about the current furor around the Cambridge Analytica scandal, I said that part of the blame – or at least, the responsibility – for the protection of our own data belonged to us. Reader Chuck Lantz responded with:

“In short, just because a company such as FaceBook can do something doesn’t mean they should.  We trusted FaceBook and they took advantage of that trust. Not being more careful with our own personal info, while not very wise, is not a crime. And attempting to dole out blame to both victim and perpetrator ain’t exactly wise, either.”

Whether it’s wise or not, when it comes to our own data, there are only three places we can reasonably look to protect it:

A) The Government

One only has to look at the supposed “grilling” of Zuckerberg by Congress to realize how forlorn a hope this is. In a follow-up post, Wharton ran a list of the questions that Congress should have asked, compiled from its own faculty. My personal favorite comes from Eric Clemons, professor of Operations, Information and Decisions:

“You benefited financially from Cambridge Analytica’s clients’ targeting of fake news and inflammatory posts. Why did you wait years to report what Cambridge Analytica was doing?”

Technology has left the regulatory ability to control it in the dust. The EU is probably the most aggressive legislative jurisdiction in the world when it comes to protecting data privacy. The General Data Protection Regulation goes into effect on May 25 of this year and incorporates sweeping new protections for EU citizens. But it will inevitably come up short in three key areas:

  • Even though it immediately applies to all countries processing the data of EU citizens, international compliance will be difficult to enforce consistently, especially if that processing extends beyond “friendly” countries.
  • Technological “loopholes” will quickly find vulnerable gray areas in the legislation that will lead to the misuse of data. Technology will always move faster than legislation. As an example, the GDPR and blockchain technologies are seemingly on a collision course.
  • Most importantly, the GDPR regulation is aimed at data “worst-case scenarios.” But there are many apparently benign applications that can border on misuse of personal data. In trying to police even the worst-case instances, the GDPR requires restrictions that will directly impact users in the areas of convenience and functionality. There are key areas, such as data portability, that aren’t fully addressed in the new legislation. At the end of the day, even though it’s protecting them, users will find the GDPR a pain in the ass.

Even with these fundamental flaws, the GDPR probably represents the world’s best attempt at data regulation. The US, as we’ve seen in the past week, comes up well short of this. And even if the people involved weren’t doddering, technologically inept old farts, the mechanisms required for the passing of relevant and timely legislation simply aren’t there. It would be like trying to catch a jet with a lasso. Should this be the job of government? Sure, I can buy that. Can government handle the job? Not based on the evidence we currently have available to us.

B) The companies that aggregate and manipulate our data.

Philosophically, I completely agree with Chuck. Like I said last week – the point of view I took left me ill at ease. We need these companies to be better than they are. We certainly need them to be better than Facebook was. But Facebook has absolutely no incentive to be better. And my fellow Media Insider, Kaila Colbin, nailed this in her column last week:

“Facebook doesn’t benefit if you feel better about yourself, or if you’re a more informed, thoughtful person. It benefits if you spend more time on its site, and buy more stuff. Giving the users control over who sees their posts offers the illusion of individual agency while protecting the prime directive.”

There are no inherent, proximate reasons for companies to be moral. They are built to be profitable (which, by the way, is why governments should never be run like a company). Facebook’s revenue model is directly opposed to personal protection of data. And that is why Facebook will try to weather this storm by implementing more self-directed security controls to put a good face on things. We will ignore those controls, because it’s a pain in the ass to do otherwise. And this scenario will continue to play out again and again.

C) Ourselves.

It sucks that we have to take this into our own hands. But I don’t see an option. Unless you see something in the first two alternatives that I don’t see, I don’t think we have any choice but to take responsibility. Do you want to put your security in the hands of the government, or Facebook? The first doesn’t have the horsepower to do the job and the second is heading in the wrong direction.

So if the responsibility ends up being ours, what can we expect?

A few weeks ago, another fellow Insider, Dave Morgan, predicted that the moats around the walled gardens of data collectors like Facebook will get deeper. But the walled-garden approach is not sustainable in the long run. All the market forces are going against it. As markets mature, they move from silos to open markets. The marketplace of data will head in the same direction. Protectionist measures may be implemented in the short term, but they will not be successful.

This doesn’t negate the fact that the protection of personal information has suddenly become a massive pain point, which makes it a huge market opportunity. And like almost all truly meaningful disruptions in the marketplace, I believe the ability to lock down our own data will come from entrepreneurialism. We need a solution that guarantees universal data portability while maintaining control, without putting an unrealistic maintenance burden on us. Rather than having the various walled gardens warehouse our data, we should retain ownership, and it should be offered to platforms like Facebook only on a case-by-case, “need to know” transactional basis. Will it be disruptive to the current social ecosystem? Absolutely. And that’s a good thing.
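For what it’s worth, here is a minimal sketch of what that case-by-case, “need to know” transaction could look like. Everything in it (the DataVault class, the grant and fetch calls, the expiring permissions) is hypothetical, my own illustration of the idea rather than any existing product:

```python
# Minimal, hypothetical sketch of user-owned data with scoped, expiring grants.
# No platform warehouses the data; it must ask, and the default answer is no.

import time

class DataVault:
    """User-owned data store: platforms get time-limited grants, not copies."""

    def __init__(self, data):
        self._data = data      # e.g., {"email": ..., "birthday": ...}
        self._grants = {}      # (platform, field) -> expiry timestamp

    def grant(self, platform, field, ttl_seconds):
        """User approves one field, for one platform, for a limited time."""
        self._grants[(platform, field)] = time.time() + ttl_seconds

    def fetch(self, platform, field):
        """Platform requests a field; denied unless a live grant exists."""
        expiry = self._grants.get((platform, field))
        if expiry is None or time.time() > expiry:
            raise PermissionError(f"{platform} has no live grant for '{field}'")
        return self._data[field]

vault = DataVault({"email": "me@example.com", "birthday": "1960-01-01"})
vault.grant("facebook", "email", ttl_seconds=3600)  # one hour, one field
print(vault.fetch("facebook", "email"))             # allowed, for now
# vault.fetch("facebook", "birthday") would raise PermissionError
```

The design point is that the platform never holds the data: it gets a scoped, expiring permission to ask for a single field, and everything else stays locked.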

The targeting of advertising is not a viable business model for the intertwined worlds of social connection and personal functionality. There is just too much at stake here. The only way it can work is for the organization doing the targeting to retain ownership of the data used for the targeting. And we should not trust them to do so in an ethical manner. Their profitability depends on them going beyond what is – or should be – acceptable to us.