Splitting Ethical Hairs in an Online Ecosystem

In looking for a topic for today’s post, I thought it might be interesting to look at the Lincoln Project. My thought was that it would be an interesting case study in how to use social media effectively.

But what I found is that the Lincoln Project is currently imploding due to scandal. And you know what? I wasn’t surprised. Disappointed? Yes. Surprised? No.

While we on the left of the political spectrum may applaud what the Lincoln Project was doing, let’s make no mistake about the tactics used. It was the social media version of Nixon’s Dirty Tricks. The whole purpose was to bait Trump into engaging in a social media brawl. This was political mudslinging, as practiced by veteran warriors. The Lincoln Project was comfortable with getting down and dirty.

Effective? Yes. Ethical? Borderline.

But what it did highlight is the sordid but powerful force of social media influence. And it’s not surprising that those with questionable ethics, as some of the Lincoln Project leaders have proven to be, were attracted to it.

Social media is the single biggest and most effective influencer on human behavior ever invented. And that should scare the hell out of us, because it’s an ecosystem in which sociopaths will thrive.

A definition of Antisocial Personality Disorder (the condition from which sociopaths suffer) states, “People with ASPD may also use ‘mind games’ to control friends, family members, co-workers, and even strangers. They may also be perceived as charismatic or charming.”

All you have to do is substitute “social media” for “mind games,” and you’ll get my point. Social media is sociopathy writ large.

That’s why we — meaning marketers — have to be very careful what we wish for. Since Google cracked down on personally identifiable information, following in the footsteps of Apple, there has been a great hue and cry from the ad-tech community about the unfairness of it all. Some of that hue and cry has issued forth here at MediaPost, like Ted McConnell’s post a few weeks ago, “Data Winter is Coming.”

And it is data that’s at the center of all this. Social media continually pumps personal data into the online ecosystem. And it’s this data that is the essential life force of the ecosystem. Ad tech sucks up that data as a raw resource and uses it for ad delivery across multiple channels. That’s the whole point of the personal identifiers that Apple and Google are cracking down on.

I suppose one could draw an artificial boundary between social media and ad targeting in other channels, but that would be splitting hairs. It’s all part of the same ecosystem. Marketers want the data, no matter where it comes from, and they want it tied to an individual to make targeting their campaigns more effective.

By building and defending an ecosystem that enables sociopathic predators, we are contributing to the problem. McConnell and I are on opposite sides of the debate here. While I don’t disagree with some of his technical points about the efficacy of Google and Apple’s moves to protect privacy, there is a much bigger question here for marketers: Should we protect user privacy, even if it makes our jobs harder?

There has always been a moral ambiguity with marketers that I find troubling. To be honest, it’s why I finally left this industry. I was tired of the “yes, but” justification that ignored all the awful things that were happening for the sake of a handful of examples that showed the industry in a better light.

And let’s just be honest about this for a second: using personally identifiable data to build a more effective machine to influence people is an awful thing. Can it be used for good? Yes. Will it be? Not if the sociopaths have anything to say about it. It’s why the current rogues’ gallery of awful people are all scrambling to carve out as big a piece of the online ecosystem as they can.

Let’s look at nature as an example. In biology, a complex balance has evolved between predators and prey. If predators are too successful, they will eliminate their prey and will subsequently starve. So a self-limiting cycle emerges to keep everything in balance. But if the limits are removed on predators, the balance is lost. The predators are free to gorge themselves.
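If you want to see that cycle in action, here’s a minimal Python sketch using the textbook Lotka-Volterra equations (my toy illustration, with made-up rate constants, not anything specific to social media):

    # A toy Lotka-Volterra predator-prey model: prey multiply on their own,
    # predators grow only by eating prey, so each population checks the other.
    def simulate(steps=5000, dt=0.01,
                 growth=1.0,      # prey birth rate
                 predation=0.5,   # rate at which predators catch prey
                 gain=0.2,        # predator offspring per prey eaten
                 death=0.6):      # predator death rate
        prey, predators = 10.0, 5.0
        history = []
        for _ in range(steps):
            d_prey = (growth - predation * predators) * prey
            d_predators = (gain * predation * prey - death) * predators
            prey += d_prey * dt
            predators += d_predators * dt
            history.append((prey, predators))
        return history

    # With both feedback loops intact, the populations oscillate in a
    # self-limiting cycle: predator boom -> prey crash -> predator starvation
    # -> prey recovery. Hold prey constant (a never-ending supply) and the
    # predator population simply grows without bound.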

When it comes to our society, social media has removed the limit that matters: the supply of “prey.” Right now, that supply is never-ending.

It’s like we’re building a hen house, inviting a fox inside and then feigning surprise when the shit hits the fan. What the hell did we expect?

Facebook Vs. Apple Vs. Your Privacy

As I was writing last week’s words about Mark Zuckerberg’s hubris-driven view of world domination, little did I know that the next chapter was literally being written. The very next day, a full-page ad from Facebook ran in The New York Times, The Washington Post and The Wall Street Journal attacking Apple for building privacy protection prompts into iOS 14.

It will come as a surprise to no one that I line up firmly on the side of Apple in this cat fight. I have always said we need to retain control over our personal data, choosing what’s shared and when. I also believe we need to have more control over the nature of the data being shared. iOS 14 is taking some much-needed steps in that direction.

Facebook is taking a stand that sadly underlines everything I wrote just last week — a disingenuous stand for a free-market environment — by unfurling the “Save small business” banner. Zuckerberg loves to stand up for “free” things — be it speech or markets — when it serves his purpose.

And the hidden agenda here is not really hidden at all. It’s not the small business around the corner Mark is worried about. It’s the 800-billion-dollar business that he owns 60% of the voting shares in.

The headline of the ad reads, “We’re standing up to Apple for small businesses everywhere.”

Ummm — yeah, right.

What you’re standing up for, Mark, is your revenue model, which depends on Facebook’s being free to hoover up as much personal data on you as possible, across as many platforms as possible.

The only thing that you care about when it comes to small businesses is that they spend as much with Facebook as possible. What you’re trying to defend is not “free” markets or “free” speech. What you’re defending is about the furthest thing imaginable from “free.” It’s $70-billion-plus in revenues and $18.5 billion in profits. What you’re trying to protect is your number-five slot on the Forbes list of the world’s richest people, with your net worth of $100 billion.

Then, on the very next day, Facebook added insult to injury with a second ad, this time defending the “Free Internet,” saying Apple “will change the internet as we know it” by forcing websites and blogs “to start charging you subscription fees.”

Good. The “internet as we know it” is a crap sandwich. “Free” has led us to exactly where we are now, with democracy hanging on by a thread, with true journalism in the last paroxysms of its battle for survival, and with anyone with half a brain feeling like they’re swimming in a sea of stupidity.

Bravo to Apple for pushing us away from the toxicity of “free,” and from the enthralled reverence for “free” things that props up a rapidly disintegrating information marketplace. If we accept a free model for our access to information, we must also accept advertising that will become increasingly intrusive, with even less regard for our personal privacy. We must accept all the things that come with “free”: the things that have proven to be so detrimental to our ability to function as a caring and compassionate democratic society over the past decade.

In doing the research for this column, I ran into an op-ed piece that ran last year in The New York Times. In it, Facebook co-founder Chris Hughes lays out the case for antitrust regulators dismantling Facebook’s dominance in social media.

This is a guy who was one of Zuckerberg’s best friends in college, who shared in the thrill of starting Facebook, and whose name is on the patent for Facebook’s News Feed algorithm. It’s a major move when a guy like that, knowing what he knows, says, “The most problematic aspect of Facebook’s power is Mark’s unilateral control over speech. There is no precedent for his ability to monitor, organize and even censor the conversations of two billion people.”

Hughes admits that the drive to break up Facebook won’t be easy. In the end, it may not even be successful. But it has to be attempted.

Too much power sits in Zuckerberg’s hands. An attempt has to be made to break down the walls behind which our private data is being manipulated. We cannot trust Facebook — or Mark Zuckerberg — to do the right thing with the data. It would be so much easier if we could, but it has been proven again and again and again that our trust is misplaced.

The very fact that those calling the shots at Facebook believe you’ll fall for yet another public appeal, wrapped in altruistic bullshit about protecting “free” that’s as substantial as Saran Wrap, should be taken as an insult. It should make you mad as hell.

And it should put Apple’s stand to protect your privacy in the right perspective: a long overdue attempt to stop the runaway train that is social media.

Looking At The World Through Zuckerberg-Colored Glasses

Mark Zuckerberg has managed to do something almost no one else has been able to do. He has actually been able to find one small patch of common ground between the far right and the far left in American politics. It seems everybody hates Facebook, even if it’s for different reasons.

The right hates the fact that they’re not given free rein to say whatever they want without Facebook tagging their posts as misinformation. The left worries about the erosion of privacy. And antitrust legislators feel Facebook is just too powerful and dominant in the social media market. Mark Zuckerberg has few friends in Washington — on either side of the aisle.

The common denominator here is control. Facebook has too much of it, and no one likes that. The question on the top of my mind is, “What is Facebook intending to do with that control?” Why is dominance an important part of Zuckerberg’s master plan?

Further, just what is that master plan? Almost four years ago, in the early days of 2017, Zuckerberg issued a 6,000-word manifesto. In it, he addressed what he called “the most important question of all.” That question was, “Are we building the world we all want?”

According to the manifesto, the plan for Facebook includes “spreading prosperity and freedom, promoting peace and understanding, lifting people out of poverty, and accelerating science.”

Then, two years later, Zuckerberg issued another lengthy memo about his vision regarding privacy and the future of communication, which “will increasingly shift to private, encrypted services where people can be confident what they say to each other stays secure.” He explained that Facebook and Instagram are like a town square, a public place for communication. But WhatsApp and Messenger are like your living room, where you can have private conversations without worrying about who else might be listening.

So, how is all that wonderfulness going, anyway?

Well, first of all, there’s what Mark says, and then there’s what Facebook actually does. When he’s not firing off biennial manifestos promising a cotton-candy-colored world, he’s busy assembling all the pieces required to suck up as much data on you as possible, and fighting lawsuits when he gets caught doing something he shouldn’t be.

You have to understand that for Zuckerberg, all these plans are built on a common foundation: Everything happens on a platform that Facebook owns. And those platforms are paid for by advertising. And advertising needs data. And therein lies the problem: What the hell is Facebook doing with all this data?

I’m pretty sure it’s not spreading prosperity and freedom or promoting peace and understanding. Quite the opposite. If you look at Facebook’s fingerprints that are all over the sociological dumpster fire that has been the past four years, you could call them the Keyser Söze of shit disturbing.

And it’s only going to get worse. Facebook and other participants in the attention economy are betting heavily on facial recognition technology. This effectively eliminates our last shred of supposed anonymity online. It forever links our digital dust trail with our real-world activities. And it dumps even more information about you into the voracious algorithms of Facebook, Google and other data devourers. Again, what might be the plans for this data: putting in place the pieces of a more utopian world, or meeting next quarter’s revenue projections?

Here’s the thing. I don’t think Zuckerberg is being wilfully dishonest when he writes these manifestos. I think — at the time — he actually believes them. And he probably legitimately thinks that Facebook is the best way to accomplish them. Zuckerberg always believes he’s the smartest one in the room. And he — like Steve Jobs — has a reality distortion field that’s always on. In that distorted reality, he believes Facebook — a company that is entirely dependent on advertising for survival — can be trusted with all our data. If we just trust him, everything will be okay.

The past four years have proven over and over again that that’s not true. It’s not even possible. No matter how good the intentions you go in with, the revenue model that fuels Facebook will subvert those intentions and turn them into something corrosive.

I think David Fincher summed up the problem nicely in his movie “The Social Network.” There, screenwriter Aaron Sorkin hit the nail on the head with the scene where Zuckerberg’s lawyer says to him, “You’re not an asshole, Mark. You’re just trying so hard to be.”

Facebook represents a lethal mixture that has all the classic warning signs of an abusive relationship:

  • A corporation that can survive only when its advertisers are happy.
  • Advertisers that are demanding more and more data they can use to target prospects.
  • A bro-culture where Facebook folks think they’re smarter than everyone else and believe they can actually thread the needle between being fabulously successful as an advertising platform and not being complete assholes.
  • And an audience of users who are misplacing their trust by buying into the occasional manifesto, while ignoring the red flags that are popping up every day.

Given all these factors, the question becomes: Will splitting up Facebook be a good or bad thing? It’s a question that will become very pertinent in the year to come. I’d love to hear your thoughts.

Why Technology May Not Save Us

We are a clever race. We’re not as smart as we think we are, but we are pretty damn smart. We are the only species that has managed to forcibly shift the eternal cycles of nature for its own benefit. We have bent the world to our will. And look how that’s turning out for us.

For the last 10,000 years, our cleverness has set us apart from all other species on earth. For the last 1,000 years, the pace of that cleverness has accelerated. In the last 100 years, it has been advancing at breakneck speed. Our tools and ingenuity have dramatically reshaped our lives. Our everyday lives are full of stuff we couldn’t imagine just a few short decades ago.

That’s a trend that’s hard to ignore. And because of that, we could be excused for thinking the same may be true going forward. When it comes to thinking about technology, we tend to do so from a glass-half-full perspective. It’s worked for us in the past. It will work for us in the future. No problem is too big for our own technological prowess to solve.

But maybe it won’t. Maybe — just maybe — we’re dealing with another type of problem now, one to which technology is not well suited as a solution. And here are three reasons why.

The Unintended Consequences Problem

Technology solutions focus on the proximate rather than the distal – which is a fancy way of saying that technology always deals with the task at hand. Being technology, these solutions usually come from an engineer’s perspective, and engineers don’t do well with nuance. Complicated they can deal with. Complexity is another matter.

I wrote about this before when I wondered why tech companies tend to be confused by ethics. It’s because ethics falls into a category known as wicked problems. Racial injustice is another wicked problem. So is climate change. All of these things are complex and messy. Their dependence on collective human behavior makes them so. Engineers don’t like wicked problems, because they are, by definition, never concretely solvable. They are also hotbeds of unintended consequences.

In his 2005 book “Collapse,” an exploration of failed societies past and present, anthropologist Jared Diamond notes that when we look forward, we tend to cling to technology as a way to dodge impending doom. But, he writes, “underlying this expression of faith is the implicit assumption that, from tomorrow onwards, technology will function primarily to solve existing problems and will cease to create new problems.”

And there’s the rub. For every proximate solution it provides, technology has a nasty habit of unleashing scads of unintended new problems. Internal combustion engines, mechanized agriculture and social media come to mind immediately as just three examples. The more complex the context of the problem, the more likely it is that the solution will come with unintended consequences.

The 90-Day Problem

Going hand in hand with the unintended consequences problem is the 90-day problem. This is a port-over from the corporate world, where management tends to focus on problems that can be solved in 90 days. This comes from a human desire to link cause and effect. It’s why we have to-do lists. We like to get shit done.

Some of the problems we’re dealing with now – like climate change – won’t be solved in 90 days. They won’t be solved in 90 weeks or even 90 months. Being wicked problems, they will probably never be solved completely. If we’re very, very, very lucky and we start acting immediately and with unprecedented effort, we might be seeing some significant progress in 90 years.

This is the inconvenient truth of these problems. The consequences are impacting us today but the payoff for tackling them is – even if we do it correctly – sometime far in the future, possibly beyond the horizon of our own lifetimes. We humans don’t do well with those kinds of timelines.

The Alfred E. Neuman Problem

The final problem with relying on technology is that we think of it as a silver bullet. The alternative is a huge amount of personal sacrifice and effort with no guarantee of success. So it’s easier just to put our faith in technology and say, “What, Me Worry?” like Mad Magazine mascot Alfred E. Neuman. It’s much easier to shift the onus of surviving our own future onto some nameless, faceless geek somewhere who’s working their way toward a “Eureka” moment.

While that may be convenient and reassuring, it’s not very realistic. I believe the past few years – and certainly the past few months – have shown us that all of us have to make some very significant changes in our lives and be prepared to rethink what we thought our future might be. At the very least, it means voting for leadership committed to fixing problems rather than ignoring them in favor of the status quo.

I hope I’m wrong, but I don’t think technology is going to save our ass this time.

A.I. and Our Current Rugged Landscape

In evolution, there’s something called the adaptive landscape. It’s a complex concept, but in the smallest nutshell possible, it refers to how fit species are for a particular environment. In a relatively static landscape, status quos tend to be maintained. It’s business as usual. 

But a rugged adaptive landscape — one beset by disruption and adversity — drives evolutionary change through speciation, the introduction of new and distinct species.

The concept is not unique to evolution. Adapting to adversity is a feature of all complex, dynamic systems. Our economy has its own version: what economist Joseph Schumpeter called the gale of creative destruction.

The same is true for cultural evolution. When shit gets real, the status quo crumbles like a sandcastle at high tide. When it comes to life today and everything we know about it, we are definitely in a rugged landscape. COVID-19 might be driving us to our new future faster than we ever suspected. The question is, what does that future look like?
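To make the landscape metaphor concrete, here’s a toy Python sketch (mine, not Harari’s or Schumpeter’s). Hill-climbers on a smooth, single-peak landscape all converge on the same optimum; on a rugged, many-peaked landscape, they get stranded on different local peaks, the rough analogue of speciation:

    import math
    import random

    def smooth(x):
        return -(x - 5.0) ** 2  # a single peak at x = 5

    def rugged(x):
        return math.sin(3 * x) - 0.1 * (x - 5.0) ** 2  # many local peaks

    def hill_climb(fitness, x, step=0.05, iters=2000):
        # Blind uphill walk: accept a small random move only if fitness improves.
        for _ in range(iters):
            candidate = x + random.uniform(-step, step)
            if fitness(candidate) > fitness(x):
                x = candidate
        return round(x, 1)

    random.seed(1)
    starts = [random.uniform(0, 10) for _ in range(5)]
    print([hill_climb(smooth, s) for s in starts])  # every climber lands near 5.0
    print([hill_climb(rugged, s) for s in starts])  # climbers stranded on different peaks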

Homo Deus

In the follow-up to his best-seller “Sapiens: A Brief History of Humankind,” author Yuval Noah Harari takes a shot at predicting just that. “Homo Deus: A Brief History of Tomorrow” looks at what our future might be. Written well before the pandemic (in 2015), the book deals frankly with the impending irrelevance of humanity.

The issue, according to Harari, is the decoupling of intelligence and consciousness. Once we break the link between the two, the human vessels that have traditionally carried intelligence become superfluous. 

In his book, Harari foresees two possible paths: techno-humanism and Dataism. 

Techno-humanism

In this version of our future, we humans remain essential, but not in our current form. Thanks to technology, we get an upgrade and become “super-human.”

Dataism

Alternatively, why do we need humans at all? Once intelligence becomes decoupled from human consciousness, will it simply decide that our corporeal forms are a charming but antiquated oddity and just start with a clean slate?

Our Current Landscape

Speaking of clean slates, many have been talking about the opportunity COVID-19 has presented to us to start anew. As I was writing this column, I received a press release from MIT promoting a new book “Building the New Economy,” edited by Alex Pentland. I haven’t read it yet, but based on the first two lines in the release, it certainly seems to be following this type of thinking: “With each major crisis, be it war, pandemic, or major new technology, there has been a need to reinvent the relationships between individuals, businesses, and governments. Today’s pandemic, joined with the tsunami of data, crypto and AI technologies, is such a crisis.”

We are intrigued by the idea of using the technologies we have available to us to build a societal framework less susceptible to inevitable Black Swans. But is this just an invitation to pry open Pandora’s box and usher in the future Yuval Noah Harari is warning us about?

The Debate 

Harari isn’t the only one seeing the impending doom of the human race. Elon Musk has been warning us about it for years. In our race to embrace artificial intelligence, Musk sees the biggest threat to human existence we have ever faced.

“I am really quite close, I am very close, to the cutting edge in AI and it scares the hell out of me,” warns Musk. “It’s capable of vastly more than almost anyone knows and the rate of improvement is exponential.”

There are those who pooh-pooh Musk’s alarmism, calling it much ado about nothing. Noted Harvard cognitive psychologist and author Steven Pinker, whose rose-colored vision of humanity’s future reliably trends up and to the right, dismissed Musk’s warnings with this: “If Elon Musk was really serious about the AI threat, he’d stop building those self-driving cars, which are the first kind of advanced AI that we’re going to see.”

In turn, Musk puts Pinker’s Pollyanna perspective down to human hubris: “This tends to plague smart people. They define themselves by their intelligence and they don’t like the idea that a machine could be way smarter than them, so they discount the idea — which is fundamentally flawed.”

From Today Forward

This brings us back to our current adaptive landscape. It’s rugged. The peaks and valleys of our day-to-day reality are more rugged than they have ever been — at least in our lifetimes.

We need help. And when you’re dealing with a massive threat that involves probability modeling and statistical inference, more advanced artificial intelligence is a natural place to look. 

Would we accept more invasive monitoring of our own bio-status, and aggregation of that data, in exchange for preventing more deaths? In a heartbeat.

Would we put our trust in algorithms that can instantly crunch vast amounts of data our own brains couldn’t possibly comprehend? We already have.

Will we even adopt connected devices constantly streaming the bits of data that define our existence to some corporate third party or government agency in return for a promise of better odds that we can extend that existence? Sign us up.

We are willingly tossing the keys to our future to the Googles, Apples, Amazons and Facebooks of the world. As much as the present may be frightening, we should consider the steps we’re taking carefully.

If we continue rushing down the path towards Yuval Noah Harari’s Dataism, we should be prepared for what we find there: “This cosmic data-processing system would be like God. It will be everywhere and will control everything, and humans are destined to merge into it.”

The Saddest Part about Sadfishing

There’s a certain kind of post I’ve always felt uncomfortable with when I see it on Facebook. You know the ones I’m talking about — where someone volunteers excruciatingly personal information about their failing relationships, their job dissatisfaction, their struggles with personal demons. These posts make me squirm.

Part of that feeling is that, being of British descent, I deal with emotions the same way the main character’s parents are dealt with in the first 15 minutes of any Disney movie: Dispose of them quickly, so we can get on with the business at hand.

I also suspect this ultra-personal sharing is happening in the wrong forum. So today, I’m trying to put an empirical finger on my gut feelings of unease about this particular topic.

After a little research, I found there’s a name for this kind of sharing: sadfishing. According to Wikipedia, “Sadfishing is the act of making exaggerated claims about one’s emotional problems to generate sympathy. The name is a variation on ‘catfishing.’ Sadfishing is a common reaction for someone going through a hard time, or pretending to be going through a hard time.”

My cynicism towards these posts probably sounds unnecessarily harsh. It goes against our empathetic grain. These are people who are just calling out for help. And one of the biggest issues with mental illness is the social stigma attached to it. Isn’t having the courage to reach out for help through any channel available — even social media — a good thing?

I do believe asking for help is undeniably a good thing. I wish I myself was better able to do that. It’s Facebook I have the problem with. Actually, I have a few problems with it.

It’s Complicated

Problem #1: Even if a post is a genuine request for help, the poster may not get the type of response he or she needs.

Mental illness, personal grief and major bumps on our life’s journey are all complicated problems — and social media is a horrible place to deal with complicated problems. It’s far too shallow to contain the breadth and depth of personal adversity.

Many read a gut-wrenching, soul-scorching post (genuine or not), then leave a heart or a sad face, and move on. Within the paper-thin social protocols of Facebook, this is an acceptable response. And it’s acceptable because we have no skin in the game. That brings us to problem #2.

Empathy is Wired to Work Face-to-Face

Our humanness works best in proximity. It’s the way we’re wired.

Let’s assume someone truly needs help. If you’re physically with them and you care about them, things are going to get real very quickly. It will be a connection that happens at all possible levels and through all senses.

This will require, at a minimum, hand-holding and, more likely, hugs, tears and a staggering personal commitment to help this person. It is not something taken or given lightly. It can be life-changing on both sides.

You can’t do it at arm’s length. And you sure as hell can’t do it through a Facebook reply.

The Post That Cried Wolf

But the biggest issue I have is that social media takes a truly genuine and admirable instinct, the simple act of helping someone, and turns it into just another example of fake news.

Not every plea for help on Facebook is exaggerated just for the sake of gaining attention, but some of them are.

Again, Facebook tends to take the less admirable parts of our character and amplify them throughout our network. So, if you tend to be narcissistic, you’re more apt to sadfish. If you have someone you know who continually reaches out through Facebook with uncomfortably personal posts of their struggles, it may be a sign of a deeper personality disorder, as noted in this post on The Conversation.

This phenomenon can create a kind of social numbness that could mask genuine requests for help. For the one sadfishing, it becomes another game that relies on generating the maximum number of social responses. Those of us on the other side quickly learn how to play the game. We minimize our personal commitment and shield ourselves against false drama.

The really sad thing about all of this is that social media has managed to turn legitimate cries for help into just more noise we have to filter through.

But What If It’s Real?

Sadfishing aside, for some people Facebook might be all they have in the way of a social lifeline. And in this case, we mustn’t throw the baby out with the bathwater. If someone you know and care about has posted what you suspect is a genuine plea for help, respond as humans should: Reach out in the most personal way possible. Elevate the conversation beyond the bounds of social media by picking up the phone or visiting them in person. Create a person-to-person connection and be there for them.

Why Good Tech Companies Keep Being Evil

You’d think we’d have learned by now. But somehow it still comes as a shock to us when tech companies are exposed as having no moral compass.

Slate recently released what it called the “Evil List” of 30 tech companies compiled through a ballot sent out to journalists, scholars, analysts, advocates and others. Slate asked them which companies were doing business in the way that troubled them most. Spoiler alert: Amazon, Facebook and Google topped the list. But they weren’t alone. Rounding out the top 10, the list of culprits included Twitter, Apple, Microsoft and Uber.

Which raises the question: Are tech companies inherently evil — like, say, a Monsanto or a Philip Morris — or is there something about tech that positively correlates with “evilness”?

I suspect it’s the second of these. I don’t believe Silicon Valley is full of fundamentally evil geniuses. But business as usual at a successful tech firm involves elemental aspects of culture that can take a company down the path to being evil.

Cultism, Loyalism and Self-Selection Bias

A successful tech company is a belief-driven meat grinder that sucks in raw, naïve talent on one end and spits out exhausted and disillusioned husks on the other. To survive in between, you’d better get with the program.

The HR dynamics of a tech startup have been called a meritocracy, where intellectual prowess is the only currency.

But that’s not quite right. Yes, you have to be smart, but it’s more important that you’re loyal. Despite their brilliance, heretics are weeded out and summarily turfed, optionless in more ways than one. A rigidly molded group-think mindset takes over the recruitment process, leading to an intellectually homogeneous monolith.

To be fair, high-growth startups need this type of mental cohesion. As blogger Paras Chopra said in a post entitled “Why startups need to be cult-like”: “The reason startups should aim to be like cults is because communication is impossible between people with different values.” You can’t go from zero to 100 without this sharing of values.

But necessary or not, this doesn’t change the fact that your average tech startup is a cult, with all the same ideological underpinnings. And the more cult-like a culture, the less likely it is to take the time for a little ethical navel-gazing.

A Different Definition of Problem Solving

When all you have is a hammer, everything looks like a nail. And for the engineer, the hammer that fixes everything is technology. But, as academic researchers Emanuel Moss and Jacob Metcalf discovered, this brand of technical solutionism can lead to a corporate environment where ethical problems are ignored because they are open-ended, intractable questions. In a previous column I referred to them as “wicked problems.”

As Moss and Metcalf found, “Organizational practices that facilitate technical success are often ported over to ethics challenges. This is manifested in the search for checklists, procedures, and evaluative metrics that could break down messy questions of ethics into digestible engineering work. This optimism is counterweighted by a concern that, even when posed as a technical question, ethics becomes ‘intractable, like it’s too big of a problem to tackle.’”

If you take this to the extreme, you get the Cambridge Analytica example, where programmer Christopher Wylie was so focused on the technical aspects of the platform he was building that he lost sight of the ethical monster he was unleashing.

A Question of Leadership

Of course, every cult needs a charismatic leader, and this is abundantly true for tech-based companies. Hubris is a commodity not in short supply among the C-level execs of tech.

It’s not that they’re assholes (well, ethical assholes anyway). It’s just that they’re, umm, highly focused and instantly dismissive of any viewpoint that’s not the same as their own. It’s the same issue I mentioned before about the pitfalls of expertise — but on steroids.

I suspect that if you did an ethical inventory of Mark Zuckerberg, Jeff Bezos, Larry Page, Sergey Brin, Travis Kalanick, Reid Hoffman and the rest, you’d find that — on the whole — they’re not horrible people. It’s just that they have a very specific definition of ethics as it pertains to their company. Anything that falls outside those narrowly defined boundaries is either dismissed or “handled” so it doesn’t get in the way of the corporate mission.

Speaking of corporate missions, leaders and their acolytes are often unaware — sometimes intentionally — of the nuances of unintended consequences. Most tech companies develop platforms that allow disruptive new market-based ecosystems to evolve on their technological foundations. Disruption always unleashes unintended social consequences. When these inevitably happen, tech companies generally handle them in one of three ways:

  1. Ignore them, and if that fails…
  2. Deny responsibility, and if that fails…
  3. Briefly apologize, do nothing, and then return to Step 1.

There is a weird type of idol worship in tech. The person atop the org chart is more than an executive. They are corporate gods — and those who dare to be disagreeable are quickly weeded out as heretics. This helps explain why Facebook can be pilloried for attacks on personal privacy and questionable design ethics, yet Mark Zuckerberg still snags a 92% CEO approval rating on Glassdoor.com.

These fundamental characteristics help explain why tech companies seem to consistently stumble over to the dark side. But there’s an elephant in the room we haven’t talked about. Almost without exception, tech business models encourage evil behavior. Let’s hold that thought for a future discussion.

A Troubling Prognostication

It’s that time of year again. My inbox is jammed with pitches from PR flacks trying to get some editorial love for their clients. In all my years of writing, I think I have actually taken the bait maybe once or twice. That is an extremely low success rate. So much for targeting.

In early January, many of the pitches offer either reviews of 2019 or predictions for 2020. I was just about to hit the delete button on one such pitch when something jumped out at me: “The number-one marketing trend for 2020 will be CDPs: customer data platforms.”

I wasn’t surprised by that. It makes sense. I know there’s a truckload of personal data being collected from everyone and their dog. Marketers love platforms. Why wouldn’t these two things come together?

But then I thought more about it — and immediately had an anxiety attack. This is not a good thing. In fact, this is a catastrophically terrible thing. It’s right up there with climate change and populist politics as the biggest world threats that keep me up at night.

To close out 2019, fellow Insider Maarten Albarda gave you a great guide on where not to spend your money. In that column, he said this: “Remember when connected TVs, Google Glass and the Amazon Fire Phone were going to provide break-through platforms that would force mass marketing out of the box, and into the promised land of end-to-end, personalized one-on-one marketing?”

Ah, marketing nirvana: the Promised Land! The Holy Grail of personalized marketing. A perfect, friction-free direct connection between the marketer and the consumer.

Maarten went on to say that social media is one of the channels you shouldn’t be throwing money into, saying, “It’s also true that we have yet to see a compelling case where social media played a significant role in the establishment or continued success of a brand or service.”

I’m not sure I agree with this, though I admit I don’t have the empirical data to back up my opinion. But I do have another, darker reason why we should shut off the taps providing the flow of revenue to the usual social suspects. Social media based on an advertising revenue model is a cancerous growth — and we have to shut off its blood flow.

Personalized one-to-one marketing — that Promised Land — cannot exist without a consistent and premeditated attack on our privacy. It comes at a price we should not be prepared to pay.

It depends on us trusting profit-driven corporations that have proven again and again that they shouldn’t be trusted. It is fueled by our darkest and least admirable motives.

The ecosystem that is required to enable one-to-one marketing is a cesspool of abuse and greed. In a pristine world of marketing with players who sport shiny ideals and rock-solid ethics, maybe it would be okay. Maybe. Personally, I wouldn’t take that bet. But in the world we actually live and work in, it’s a sure recipe for disaster.

To see just how subversive data-driven marketing can get, read “Mindf*ck” by Christopher Wylie. If that name sounds vaguely familiar to you, let me jog your memory. Wylie is the whistleblower who first exposed the Cambridge Analytica scandal. An openly gay, liberal, pink-haired Canadian, he seems an unlikely candidate to be the architect of the data-driven “Mindf*ck” machine that drove Trump into office and the Brexit vote over the 50% threshold.

Wylie admits to being blinded by the tantalizing possibilities of what he was working on at Cambridge Analytica: “Every day, I overlooked, ignored, or explained away warning signs. With so much intellectual freedom, and with scholars from the world’s leading universities telling me we were on the cusp of ‘revolutionizing’ social science, I had gotten greedy, ignoring the dark side of what we were doing.”

But Wylie is more than a whistleblower. He’s a surprisingly adept writer who has a firm grasp on not just the technical aspects, but also the psychology behind the weaponization of data. If venture capitalist Roger McNamee’s tell-all exposé of Facebook, “Zucked,” kept you up at night, “Mindf*ck” will give you screaming night terrors.

I usually hold off jumping on the year-end prognostication bandwagon, because I’ve always felt it’s a mug’s game. I would like to think that 2020 will be the year when the world becomes “woke” to the threat of profit-driven data abuse — but based on our collective track record of ignoring inconvenient truths, I’m not holding my breath.

Why Quitting Facebook is Easier Said than Done

Not too long ago, I was listening to an interview with a privacy expert about… you guessed it, Facebook. The gist of the interview was that Facebook can’t be trusted with our personal data, as it has proven time and again.

But when asked if she would quit Facebook completely because of this — as tech columnist Walt Mossberg did — the expert said something interesting: “I can’t really afford to give up Facebook completely. For me, being able to quit Facebook is a position of privilege.”

Wow! There is a lot living in that statement. It means Facebook is fundamental to most of our lives — it’s an essential service. But it also means that we don’t trust it — at all. Which puts Facebook in the same category as banks, cable companies and every level of government.

Facebook — in many minds, anyway — became an essential service because of Metcalfe’s Law, which states that the effect of a network is proportional to the square of the number of connected users of the system. More users = disproportionately more value. Facebook has Metcalfe’s Law nailed. It has almost two and a half billion users.

But it’s more than just sheer numbers. It’s the nature of engagement. Thanks to a premeditated addictiveness in Facebook’s design, its users are regular users. Of those 2.5 billion users, 1.6 billion log in daily, and 1.1 billion log in daily from a mobile device. That means roughly 15% of all the people in the world are constantly — addictively — connected to Facebook.
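For the technically inclined, the math is simple enough to check in a few lines of Python (the rival network and the world population figure are my rough assumptions):

    # Metcalfe's Law: network value grows with the square of connected users.
    facebook_users = 2_500_000_000
    rival_users = 250_000_000  # a hypothetical rival one-tenth the size
    print((facebook_users / rival_users) ** 2)  # 100.0 -- 10x the users, ~100x the value

    # And the engagement numbers above, as a share of humanity:
    mobile_daily_users = 1_100_000_000
    world_population = 7_700_000_000  # rough circa-2020 figure
    print(f"{mobile_daily_users / world_population:.0%}")  # 14%, roughly the 15% cited above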

And that’s why Facebook appears to be essential. If we need to connect to people, Facebook is the most obvious way to do it. If we have a business, we need Facebook to let our potential customers know what we’re doing. If we belong to a group or organization, we need Facebook to stay in touch with other members. If we are social beasts at all, we need Facebook to keep our social network from fraying away.

We don’t trust Facebook — but we do need it.

Or do we? After all, we homo sapiens have managed to survive for 99.9925% of our collective existence without Facebook. And there is mounting research that indicates going cold turkey on Facebook is great for your own mental health. But like all things that are good for you, quitting Facebook can be a real pain in the ass.
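That oddly precise number holds up, assuming roughly 200,000 years of Homo sapiens and about 15 years of Facebook at the time of writing:

    homo_sapiens_years = 200_000  # rough age of our species
    facebook_years = 15           # Facebook launched in 2004
    print(f"{1 - facebook_years / homo_sapiens_years:.4%}")  # 99.9925%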

Last year, New York Times tech writer Brian Chen decided to ditch Facebook. This is a guy who is fully conversant in tech — and even he found that making the break was much easier said than done. Facebook, in its malevolent brilliance, has erected some significant barriers to exit for users who try to make a break for it.

This is especially true if you have fallen into the convenient trap of using Facebook’s social sign-in on sites rather than juggling multiple passwords and user IDs. If you’re up for the challenge, Chen has put together a 6-step guide to making a clean break of it.

But what if you happen to use Facebook for advertising? You’ve essentially sold your soul to Zuckerberg. Reading through Chen’s guide, I’ve decided that it’s just easier to go into the Witness Protection Program. Even there, Facebook will still be tracking me.

By the way, after six months without Facebook, Chen did a follow-up on how his life had changed. The short answer is: not much, but what did change was for the better. His family didn’t collapse. His friends didn’t desert him. He still managed to have a social life. He spent a lot less on spontaneous online purchases. And he read more books.

The biggest outcome was that advertisers “gave up on stalking” him. Without a steady stream of personal data from Facebook, Instagram thought he was a woman.

Whether you’re able to swear off Facebook completely or not, I wonder what the continuing meltdown of trust in Facebook will do to its usage patterns. As in most things digital, young people seem to have intuitively stumbled on the best way to use Facebook. Use it if you must to connect to people when you need to (in their case, grandmothers and great-aunts) — but for heaven’s sake, don’t post anything even faintly personal. Never afford Facebook’s AI the briefest glimpse into your soul. No personal affirmations, no confessionals, no motivational posts and — for the love of all that is democratic — nothing political.

Oh, one more thing. Keep your damned finger off of the like button, unless it’s for your cousin Shermy’s 55th birthday celebration in Zihuatanejo.

Even then, maybe it’s time to pick up the phone and call the ol’ Shermeister. It’s been too long.

Looking Back at a Decade That’s 99.44% Done

Remember 2010? For me, that was a pretty important year. It was the year I sold my digital marketing business. While I would continue to actively work in the industry for another three years, things were never the same as they were in 2010. And — looking back — I realize that’s pretty well true for most of us. We were more innocent and more hopeful. We still believed that the Internet would be the solution, not the problem.

In 2010, two big trends were jointly reshaping our notions of being connected. Early in the year, former Morgan Stanley analyst Mary Meeker laid them out for us in her “Internet Trends” report. Back then, just three years after the introduction of the iPhone, internet usage from mobile devices hadn’t even reached double digits as a percentage of overall traffic. Meeker knew this was going to change, and quickly. She saw mobile adoption on track to be the steepest tech adoption curve in history. She was right. Today, over 60% of internet usage comes from a mobile device.

The other defining trend was social media. Even then, Facebook had about 600 million users, or just under 10% of the world’s population. When you had a platform that big — connecting that many people — you just knew the consequences would be significant. There were some pretty rosy predictions for the impact of social media.

Of course, it’s the stuff you can’t predict that will bite you. Like I said, we were a little naïve.

One trend that Meeker didn’t predict was the nasty issue of data ownership. We were just starting to become aware of the looming spectre of the privacy problem.

The biggest Internet-related story of 2010 was WikiLeaks. In February, Julian Assange’s site started releasing 260,000 sensitive diplomatic cables sent to it by Chelsea Manning, a U.S. soldier stationed in Iraq. According to the governments of the world, this was an illegal release of classified material, tantamount to an act of espionage. According to public opinion, this was shit finally rolling uphill. We revelled in the revelations. WikiLeaks and Julian Assange were taking it to the man.

That budding sense of optimism continued throughout the year. By December of 2010, the Arab Spring had begun. This was our virtual vindication — the awesome power of social media was a blinding light shining into the darkest nooks and crannies of despotism and tyranny. The digital future was clear and bright. We would triumph thanks to technology. The Internet had helped put Obama in the White House. It had toppled corrupt regimes.

A decade later, we’re shell-shocked to discover that the Internet is the source of a whole new kind of corruption.

The rigidly digitized ideals of Zuckerberg, Page, Brin et al. seemed to be a call to arms: transparency, the elimination of bureaucracy, a free and open, friction-free digital market, the sharing economy, a vast social network that would connect humanity in ways never imagined, connected devices in our pockets — in 2010, all things seemed possible. And we were naïve enough to believe that those things would all be good and moral and in our best interests.

But soon, we were smelling the stench that came from Silicon Valley. Those ideals were subverted into an outright attack on our privacy. Democratic elections were sold to the highest bidder. Ideals evaporated under the pressure of profit margins and expanding power. Those impossibly bright, impossibly young billionaire CEOs of ten years ago are now testifying in front of Congress. The corporate culture of many tech companies reeks like a frat house on Sunday morning.

Is there a lesson to be learned? I hope so. I think it’s this: Technology won’t do the heavy lifting for us. It is a tool that is subject to our own frailty. It amplifies what it is to be human. It won’t eliminate greed or corruption unless we continually steer it in that direction.

And I use the term “we” deliberately. We have to hold tech companies to a higher standard. We have to be more discerning of what we agree to. We have to start demanding better treatment and not be willing to trade our rights away with the click of an accept button. 

A lot of what could have been has slipped through our fingers in the last 10 years. It shouldn’t have happened. Not on our watch.