Why Quitting Facebook is Easier Said than Done

Not too long ago, I was listening to an interview with a privacy expert about… you guessed it, Facebook. The gist of the interview was that Facebook can’t be trusted with our personal data, as it has proven time and again.

But when asked if she would quit Facebook completely because of this — as tech columnist Walt Mossberg did — the expert said something interesting: “I can’t really afford to give up Facebook completely. For me, being able to quit Facebook is a position of privilege.”

Wow!  There is a lot living in that statement. It means Facebook is fundamental to most of our lives — it’s an essential service. But it also means that we don’t trust it — at all.  Which puts Facebook in the same category as banks, cable companies and every level of government.

Facebook — in many minds anyway — became an essential service because of Metcalfe’s Law, which states that the value of a network is proportional to the square of the number of connected users of the system. More users mean quadratically more value. Facebook has Metcalfe’s Law nailed. It has almost two and a half billion users.

But it’s more than just sheer numbers. It’s the nature of engagement. Thanks to a premeditated addictiveness in Facebook’s design, its users are regular users. Of those 2.5 billion users, 1.6 billion log in daily, and 1.1 billion log in daily from a mobile device. That means roughly 15% of all the people in the world are constantly — addictively — connected to Facebook.
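
For the numerically inclined, here is a rough back-of-envelope sketch in Python of the figures above and of what Metcalfe’s Law implies. The world-population estimate is my own assumption, and the user counts are the rounded figures already quoted, so treat the output as approximate.

```python
# Rough back-of-envelope check on the figures above (all numbers approximate).
WORLD_POPULATION = 7.5e9      # assumed world population at time of writing
MONTHLY_USERS = 2.5e9         # "almost two and a half billion users"
DAILY_USERS = 1.6e9           # daily logins
MOBILE_DAILY_USERS = 1.1e9    # daily logins from a mobile device

print(f"Daily users as a share of the world: {DAILY_USERS / WORLD_POPULATION:.0%}")
print(f"Mobile daily users as a share of the world: {MOBILE_DAILY_USERS / WORLD_POPULATION:.0%}")

# Metcalfe's Law: network value scales with the square of the user count,
# so doubling the number of users roughly quadruples the network's value.
def metcalfe_value(users: float) -> float:
    return users ** 2

print(metcalfe_value(2 * DAILY_USERS) / metcalfe_value(DAILY_USERS))  # 4.0
```

The exact percentages shift with whatever population estimate you plug in; the part worth noticing is the squared term, which is why each additional user adds more value than the last.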

And that’s why Facebook appears to be essential. If we need to connect to people, Facebook is the most obvious way to do it. If we have a business, we need Facebook to let our potential customers know what we’re doing. If we belong to a group or organization, we need Facebook to stay in touch with other members. If we are social beasts at all, we need Facebook to keep our social network from fraying away.

We don’t trust Facebook — but we do need it.

Or do we? After all, we Homo sapiens have managed to survive for 99.9925% of our collective existence without Facebook. And there is mounting research indicating that going cold turkey on Facebook is great for your mental health. But like all things that are good for you, quitting Facebook can be a real pain in the ass.

Last year, New York Times tech writer Brian Chen decided to ditch Facebook. This is a guy who is fully conversant in tech — and even he found that making the break was much easier said than done. Facebook, in its malevolent brilliance, has erected some significant barriers to exit for users who try to make a break for it.

This is especially true if you have fallen into the convenient trap of using Facebook’s social sign-in on sites rather than juggling multiple passwords and user IDs. If you’re up for the challenge, Chen has put together a 6-step guide to making a clean break of it.

But what if you happen to use Facebook for advertising? You’ve essentially sold your soul to Zuckerberg. Reading through Chen’s guide, I’ve decided that it’s just easier to go into the Witness Protection Program. Even there, Facebook will still be tracking me.

By the way, after six months without Facebook, Chen did a follow-up on how his life had changed. The short answer is: not much, but what did change was for the better. His family didn’t collapse. His friends didn’t desert him. He still managed to have a social life. He spent a lot less on spontaneous online purchases. And he read more books.

The biggest outcome was that advertisers “gave up on stalking” him. Without a steady stream of personal data from Facebook, Instagram thought he was a woman.

Whether you’re able to swear off Facebook completely or not, I wonder what the continuing meltdown of trust in Facebook will do for its usage patterns. As in most things digital, young people seem to have intuitively stumbled on the best way to use Facebook. Use it if you must to connect to people when you need to (in their case, grandmothers and great-aunts) — but for heaven’s sake, don’t post anything even faintly personal. Never afford Facebook’s AI the briefest glimpse into your soul. No personal affirmations, no confessionals, no motivational posts and — for the love of all that is democratic — nothing political.

Oh, one more thing. Keep your damned finger off of the like button, unless it’s for your cousin Shermy’s 55th birthday celebration in Zihuatanejo.

Even then, maybe it’s time to pick up the phone and call the ol’ Shermeister. It’s been too long.

Looking Back at a Decade That’s 99.44% Done

Remember 2010? For me that was a pretty important year. It was the year I sold my digital marketing business. While I would continue to actively work in the industry for another 3 years, for me things were never the same as they were in 2010. And – looking back – I realize that’s pretty well true for most of us. We were more innocent and more hopeful. We still believed that the Internet would be the solution, not the problem.

In 2010, two big trends were jointly reshaping our notions of being connected. Early in the year, former Morgan Stanley analyst Mary Meeker laid them out for us in her “Internet Trends” report. Back then, just three years after the introduction of the iPhone, internet usage from mobile devices hadn’t even reached double digits as a percentage of overall traffic. Meeker knew this was going to change, and quickly. She saw mobile adoption on track to be the steepest tech adoption curve in history. She was right. Today, over 60% of internet usage comes from a mobile device.

The other defining trend was social media. Even then, Facebook had about 600 million users, or just under 10% of the world’s population. When you had a platform that big – connecting that many people – you just knew the consequences would be significant. There were some pretty rosy predictions for the impact of social media.

Of course, it’s the stuff you can’t predict that will bite you. Like I said, we were a little naïve.

One trend that Meeker didn’t predict was the nasty issue of data ownership. We were just starting to become aware of the looming threats to our privacy.

The biggest Internet-related story of 2010 was WikiLeaks. That year, Julian Assange’s site began releasing 260,000 sensitive diplomatic cables sent to it by Chelsea Manning, a US soldier stationed in Iraq. According to the governments of the world, this was an illegal release of classified material, tantamount to an act of espionage. According to public opinion, this was shit finally rolling uphill. We revelled in the revelations. WikiLeaks and Julian Assange were taking it to the man.

That budding sense of optimism continued throughout the year. By December of 2010, the Arab Spring had begun. This was our virtual vindication – the awesome power of social media was a blinding light shining into the darkest nooks and crannies of despotism and tyranny. The digital future was clear and bright. We would triumph thanks to technology. The Internet had helped put Obama in the White House. It had toppled corrupt regimes.

A decade later, we’re shell shocked to discover that the Internet is the source of a whole new kind of corruption.

The rigidly digitized ideals of Zuckerberg, Page, Brin et al. seemed to be a call to arms: transparency, the elimination of bureaucracy, a free, open, friction-free digital market, the sharing economy, a vast social network that would connect humanity in ways never imagined, connected devices in our pockets – in 2010 all things seemed possible. And we were naïve enough to believe that those things would all be good and moral and in our best interests.

But soon, we were smelling the stench that came from Silicon Valley. Those ideals were subverted into an outright attack on our privacy. Democratic elections were sold to the highest bidder. Ideals evaporated under the pressure of profit margins and expanding power. Those impossibly bright, impossibly young billionaire CEOs of ten years ago are now testifying in front of Congress. The corporate culture of many tech companies reeks like a frat house on Sunday morning.

Is there a lesson to be learned? I hope so. I think it’s this. Technology won’t do the heavy lifting for us. It is a tool that is subject to our own frailty. It amplifies what it is to be human. It won’t eliminate greed or corruption unless we continually steer it in that direction. 

And I use the term “we” deliberately. We have to hold tech companies to a higher standard. We have to be more discerning of what we agree to. We have to start demanding better treatment and not be willing to trade our rights away with the click of an accept button. 

A lot of what could have been slipped through our fingers in the last 10 years.  It shouldn’t have happened. Not on our watch.

Why Elizabeth Warren Wants to Break Up Big Tech

Earlier this year, Democratic Presidential Candidate Elizabeth Warren posted an online missive in which she laid out her plans to break up big tech (notably Amazon, Google and Facebook). In it, she noted:

“Today’s big tech companies have too much power — too much power over our economy, our society, and our democracy. They’ve bulldozed competition, used our private information for profit, and tilted the playing field against everyone else. And in the process, they have hurt small businesses and stifled innovation.”

We, here in the west, are big believers in Adam Smith’s Invisible Hand. We inherently believe that markets will self-regulate and eventually balance themselves. We are loath to involve government in the running of a free market.

In introducing the concept of the Invisible Hand, Smith speculated:

“[The rich] consume little more than the poor, and in spite of their natural selfishness and rapacity…they divide with the poor the produce of all their improvements. They are led by an invisible hand to make nearly the same distribution of the necessaries of life, which would have been made, had the earth been divided into equal portions among all its inhabitants, and thus without intending it, without knowing it, advance the interest of the society, and afford means to the multiplication of the species.”

In short, a rising tide raises all boats. But there is a dicey little dilemma buried in the midst of the Invisible Hand Premise – summed up most succinctly by the fictitious Gordon Gekko in the 1987 movie Wall Street: “Greed is Good.”

More eloquently, economist and Nobel laureate Milton Friedman explained it like this:

“The great virtue of a free market system is that it does not care what color people are; it does not care what their religion is; it only cares whether they can produce something you want to buy. It is the most effective system we have discovered to enable people who hate one another to deal with one another and help one another.” 

But here’s the thing. Up until very recently, the concept of the Invisible Hand dealt only with physical goods. It was all about maximizing tangible resources and distributing them to the greatest number of people in the most efficient way possible.

The difference now is that we’re not just talking about toasters or running shoes. Physical things are not the stock in trade of Facebook or Google. They deal in information, feelings, emotions, beliefs and desires. We are not talking about hardware any longer; we are talking about the very operating system of our society. The thing that guides the Invisible Hand is no longer consumption, it’s influence. And, in that case, we have to wonder whether we’re willing to trust our future to the conscience of a corporation.

For this reason, I suspect Warren might be right. All the past arguments for keeping government out of business were based on a physical market. When we shift that to a market that peddles influence, those arguments are flipped on their head. Milton Friedman himself said, “It (the corporation) only cares whether they can produce something you want to buy.” Let’s shift that to today’s world and apply it to a corporation like Facebook – “It only cares whether they can produce something that captures your attention.” To expect anything else from a corporation that peddles persuasion is to expect too much.

The problem with Warren’s argument is that she is still using the language of a market that dealt with consumable products. She wants to break up a monopoly that is limiting competition. And she is targeting that message to an audience that generally believes that big government and free markets don’t mix.

The much, much bigger issue here is that even if you believe in the efficacy of the Invisible Hand, as described by all believers from Smith to Friedman, you also have to believe that the single purpose of a corporation that relies on selling persuasion will be to influence even more people more effectively. None of the most fervent evangelists of the Invisible Hand ever argued that corporations have a conscience. They simply stated that the interests of a profit-driven company and an audience intent on consumption were typically aligned.

We’re now playing a different game with significantly different rules.

This is Why We Can’t Have Nice Things

Relevance is the new gold standard in marketing. In an article in the Harvard Business Review written last year, John Zealley, Robert Wollan and Joshua Bellin — three senior execs at Accenture — outline five stages of marketing (paraphrased courtesy of a post from Phillip Nones):

  1. Mass marketing (up through the 1970s) – The era of mass production, scale and distribution.
  2. Marketing segmentation (1980s) – More sophisticated research enabling marketers to target customers in niche segments.
  3. Customer-level marketing (1990s and 2000s) – Advances in enterprise IT make it possible to target individuals and aim to maximize customer lifetime value.
  4. Loyalty marketing (2010s) – The era of CRM, tailored incentives and advanced customer retention.
  5. Relevance marketing (emerging) – Mass communication to the previously unattainable “Segment of One.”

This last stage – according to marketers past and present – should be the golden era of marketing:

“The perfect advertisement is one of which the reader can say, ‘This is for me, and me alone.’”

— Peter Drucker

“Audiences crave tailored messages that cater to them specifically and they are willing to offer information that enables marketers to do so.”

— Kevin Tash, CEO of Tack Media, a digital marketing agency in Los Angeles

Umm…no! In fact, hell, no!

I agree that relevance is an important thing. And in an ethical world, the exchange Tash talks about would be a good thing, for both consumers and marketers. But we don’t live in such a world. The world we live in has companies like Facebook and Cambridge Analytica.

Stop Thinking Like a Marketer!

There is a cognitive whiplash that happens when our perspective changes from that of a marketer to that of a consumer. I’ve seen it many times. I’ve even prompted it on occasion. But to watch it in 113 minutes of excruciating detail, you should catch “The Great Hack” on Netflix.

The documentary is a journalistic peeling of the onion that is the Cambridge Analytica scandal. It was kicked off by the whistle-blowing of Christopher Wylie, a contract programmer who enjoyed his 15 minutes of fame. But to me, the far more interesting story is that of Brittany Kaiser, the director of business development at SCL Group, the parent company of Cambridge Analytica. The documentary digs into the tortured shift of perspective as she transitions from thinking like a marketer to thinking like a citizen who has just had her private data violated. It makes for compelling viewing.

Kaiser shifted her ideological compass about as far as one possibly could, from her beginnings as an idealistic intern for Barack Obama and a lobbyist for Amnesty International to one of the chief architects of the campaigns supporting Trump’s presidential run, Brexit and other far-right persuasion blitzkriegs. At one point, she justifies her shift to the right by revealing her family’s financial struggle and the fact that you don’t get paid much as an underling for Democrats or as a moral lobbyist. The big bucks are found in the ethically grey areas. Throughout the documentary, she vacillates between the outrage of a private citizen and the rationalization of a marketer. She is a woman torn between two conflicting perspectives.

We marketers have to stop kidding ourselves and justifying misuse of personal data with statements like the one previously quoted from Kevin Tash. As people, we’re okay. I like most of the marketers I know. But as professional marketers, we have a pretty shitty track record. We trample privacy, we pry into places we shouldn’t and we gleefully high-five ourselves when we deliver the goods on a campaign — no matter who that campaign might be for and what its goals might be. We are very different people when we’re on the clock.

We are now faced with what may be the most important questions of our lives: How do we manage our personal data? Who owns it? Who stores it? Who has the right to use it? When we answer those questions, let’s do it as people, and not marketers. Because there is a lot more at stake here than the ROI rates on a marketing campaign.

Dear Facebook. It’s Not Me, It’s You

So, let’s say, hypothetically, one wanted to break up with Facebook. Just how would one do that?

I heard one person say that swearing off Facebook was a “position of privilege.” It was an odd way of putting it, until I thought about it a bit. This person was right. Much as I’d like to follow in retired tech journalist Walter Mossberg’s footsteps and quit Facebook cold turkey, I don’t think I can. I am not in that position. I am not so privileged.

This in no way condones Facebook and its actions. I’m still pretty pissed off about that. I suspect I might well be in an abusive relationship. I have this suspicion because I looked it up on Mentalhealth.net, a website offered by American Addiction Centers. According to them, an abusive relationship is

“where one thing mistreats or misuses another thing. The important words in this definition are “mistreat” and “misuse”; they imply that there is a standard that describes how things should be treated and used, and that an abuser has violated that standard.

For the most part, only human beings are capable of being abusive, because only human beings are capable of understanding how things should be treated in the first place and then violating that standard anyway.”

That sounds bang on when I think about how Facebook has treated its users and their personal data. And everyone will tell you that if you’re in an unhealthy relationship, you should get out. But it’s not that easy. And that’s because of Metcalfe’s Law. Originally applied to telecommunication networks, it also applies to digitally mediated social networks. Metcalfe’s Law states that the value of a telecommunications network is proportional to the square of the number of connected users of the system.

The example often used is a telephone. If you’re the only person with one, it’s useless. If everyone has one, it’s invaluable. Facebook has about 2.3 billion users worldwide. That’s one out of every three people on this planet. Do the math. That’s a ton of value. It makes Facebook what they call very “sticky” in Silicon Valley.
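
To make “do the math” a little more concrete, here is a minimal Python sketch of the telephone example. The user counts are illustrative only, and the pairwise-connection formula is the standard combinatorial reading of Metcalfe’s Law rather than anything specific to Facebook.

```python
# Minimal illustration of the telephone example: a network of n users allows
# n * (n - 1) / 2 possible pairwise connections, which is where the "square of
# the number of connected users" in Metcalfe's Law comes from.
def possible_connections(n: int) -> int:
    return n * (n - 1) // 2

for n in (1, 2, 10, 1_000, 2_300_000_000):
    print(f"{n:>13,} users -> {possible_connections(n):,} possible connections")

# One user -> zero connections: a telephone nobody else owns is useless.
# 2.3 billion users -> roughly 2.6 quintillion possible connections: "sticky."
```

The jump from “useless” to “invaluable” is all in that quadratic term.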

But it’s not just the number of users that makes Facebook valuable. It’s also the way they use it. Facebook has always intended to become the de facto platform for broad-based social connection. As such, it is built on “weak ties” – those social bonds defined by Mark Granovetter almost 50 years ago which connect scattered nodes in a network. To go back to the aforementioned “position of privilege” comment, the privilege in this case is a lack of dependence on weak ties.

My kids could probably quit Facebook. At least, it would be easier for them than it would be for me. But they are also not at the stage of their lives where weak ties are all that important. They use other platforms, like Snapchat, to communicate with their friends. It’s a channel built for strong ties. If they do need to bridge weak ties, they escalate their social postings, first to Instagram, then – finally – to their last resort: Facebook. It’s only through Facebook that they’ll reach parents, aunts, cousins and grandmas all at once.

It’s different for me. I have a lifetime of accumulated weak ties that I need to connect with all the time. And Facebook is the best way to do it. I connect with various groups, relatives, acquaintances and colleagues on an as-needed basis. I also need a Facebook presence for my business, because it’s expected by others who need to connect with me. I don’t have the privilege of severing those ties.

So, I’ve decided that I can’t quit Facebook. At least, not yet. But I can use Facebook differently – more impersonally. I can use it as a connection platform rather than a channel for personal expression. I can make sure as little of my personal data falls into Facebook’s hands as possible. I don’t need to post what I like, how I’m feeling, what my beliefs are or what I do daily. I can close myself off to Facebook, turning this into a passionless relationship. From now on, I’ll consider it a tool – not a friend, not a confidante, not something I can trust – just a way to connect when I need to. My personal life is none of Facebook’s business – literally.

For me, it’s the first step in preventing more abuse.

The Strange Polarity of Facebook’s Moral Compass

For Facebook, 2018 came in like a lion, and went out like a really pissed-off Godzilla with a savagely bad hangover after the Mother of all New Year’s Eve parties. In other words, it was not a good year.

As Zuckerberg’s 2018 shuddered to its close, it was disclosed that Facebook and Friends had opened our personal data kimonos for any of their “premier” partners. This was in direct violation of their own data privacy policy, which makes it even more reprehensible than usual. This wasn’t a bone-headed fumbling of our personal information. This was a fully intentional plan to financially benefit from that data in a way we didn’t agree to, hide that fact from us and then deliberately lie about it on more than one occasion.

I was listening to a radio interview about this latest revelation, and one of the analysts – social media expert and author Alexandra Samuel – mused about when it was that Facebook lost its moral compass. She has been familiar with the company since its earliest days, having had the opportunity to talk to Mark Zuckerberg personally. In her telling, Zuckerberg is an evangelist who lost his way, drawn to the dark side by the corporate curse of profit and greed.

But Siva Vaidhyanathan – the Robertson Professor of Modern Media Studies at the University of Virginia – tells a different story. And it’s one that seems much more plausible to me. Zuckerberg may indeed be an evangelist, although I suspect he’s more of a megalomaniac. Either way, he does have a mission. And that mission is not opposed to corporate skullduggery. It fully embraces it. Zuckerberg believes he’s out to change the world, while making a shitload of money along the way. And he’s fine with that.

That came as a revelation to me. I spent a good part of 2018 wondering how Facebook could have been so horrendously cavalier with our personal data. I put it down to corporate malfeasance. Public companies are not usually paragons of ethical efficacy. This is especially true when ethics and profitability are diametrically opposed to each other. This is the case with Facebook. In order for Facebook to maintain profitability with its current revenue model, it has to do things with our private data we’d rather not know about.

But even given the moral vacuum that can be found in most corporate boardrooms, Facebook’s brand of hubris in the face of increasingly disturbing revelations seems off-note – out of kilter with the normal damage control playbook. Vaidhyanathan’s analysis brings that cognitive dissonance into focus. And it’s a picture that is disturbing on many levels.

Siva Vaidhyanathan

According to Vaidhyanathan, “Zuckerberg has two core principles from which he has never wavered. They are the founding tenets of Facebook. First, the more people use Facebook for more reasons for more time of the day the better those people will be. … Zuckerberg truly believes that Facebook benefits humanity and we should use it more, not less. What’s good for Facebook is good for the world and vice-versa.

Second, Zuckerberg deeply believes that the records of our interests, opinions, desires, and interactions with others should be shared as widely as possible so that companies like Facebook can make our lives better for us – even without our knowledge or permission.”

Mark Zuckerberg is not the first tech company founder to have a seemingly ruthless god complex and a “bigger than any one of us” mission. Steve Jobs, Bill Gates, Larry Page, Larry Ellison; I could go on. What is different this time is that Zuckerberg’s chosen revenue model runs completely counter to the idea of personal privacy. Yes, Google makes money from advertising, but the vast majority of that is delivered in response to a very intentional and conscious request on the part of the user. Facebook’s gaping vulnerability is that it can only be profitable by doing things of which we’re unaware. As Vaidhyanathan says, “violating our privacy is in Facebook’s DNA.”

Which all leads to the question, “Are we okay with that?” I’ve been thinking about that myself. Obviously, I’m not okay with it. I just spent 720 words telling you so. But will I strip my profile from the platform?

I’m not sure. Give me a week to think about it.

Why Disruption is Becoming More Likely in the Data Marketplace

Another week, another breach. 500 million records were hacked from Marriott, making it the second-largest data breach in history, behind Yahoo’s breach of 3 billion user accounts.

For now. There will probably be a bigger breach. There will definitely be a more painful breach. And by painful, I mean painful to you and me.  It’s in that pain – specifically, the degree of the pain – that the future of how we handle our personal data lies.

Markets innovate along paths of least resistance. Market development is a constantly evolving dynamic tension between innovation and resistance. If there is little resistance, markets will innovate in predictable ways from their current state. If this innovation leads to pushback from the market, we encounter resistance. When markets meet significant resistance, disruption occurs, opening the door for innovation in new directions to get around the resistance of the marketplace. When we talk about data, we are talking about a market where value is still in the process of defining itself. And it’s in the definition of value where we’ll find the potential market resistance for data.

Individual data is a raw resource. It doesn’t have value until it becomes “Big.” Personal data needs to be aggregated and structured to become valuable. This creates a dilemma for us. Unless we provide the raw material, there is no “big” data possible. This makes it valuable to others, but not necessarily to ourselves.

Up to now, the value we have exchanged our privacy for has been convenience. It’s easier for us to store our credit card data with Amazon so we can enable one-click ordering. And we feel this exchange has been a bargain. But it remains an asymmetrical exchange. Our data has no positive value to us, only negative. We can be hurt by our data, but other than the aforementioned exchange for convenience, it doesn’t really help us. That is why we’ve been willing to give it away for so little. But once it’s aggregated and becomes “big,” it has tremendous value to the people we give it to. It also has value to those who wish to steal that data from those with whom we have entrusted it. The irony here is that whether that data is in the “right” hands or the “wrong” ones, it can still mean pain for us. The differentiator is the degree of that pain.

Let’s examine the potential harm that could come from sharing our data. How painful could this get? Literally every warning we write about here at Mediapost has data at the core. Just yesterday, fellow Insider Steven Rosenbaum wrote about how the definition of warfare has changed. The fight isn’t for land. War is now waged for our minds. And data is used to target those minds.

Essentially, sharing our data makes us vulnerable to being targeted. And the outcome of that targeting can range from simply annoying to life-alteringly dangerous. Even the fact that we refer to it as targeting should raise a red flag. There’s a reason why we use a term typically associated with a negative outcome for those on the receiving end. You’re very seldom targeted for things that are beneficial to you. And that’s true no matter who’s the one doing the targeting. At its most benign, targeting is used to push some type of messaging – typically advertising – to you. But you could also be targeted by Russian hackers in an attempt to subvert democracy. Most acutely, you could be targeted for financial fraud. Or blackmail. Targeting is almost never a good thing. The degree of harm can vary, but the cause doesn’t. Our data – the data we share willingly – makes targeting possible.

We are in an interesting time for data. We have largely shrugged off the pain of the various breaches that have made it to the news. We still hand over our personal data with little to no thought of the consequences. And because we still participate by providing the raw material, we have enabled the development of an entire data marketplace. We do so because there is no alternative without making sacrifices we are not willing to make. But as the degree of personal pain continues to get dialed up, all the prerequisites of market disruption are being put in place. Breaches will continue. The odds of us being personally affected will continue to climb. And innovators will find solutions to this problem that will be increasingly easy to adopt.

For many, many reasons, I don’t think the current trajectory of the data marketplace is sustainable. I’m betting on disruption.