Why Free News is (usually) Bad News

Pretty much everything about the next week will be unpredictable. But whatever happens on Nov. 3, I’m sure there will be much teeth-gnashing and navel-gazing about the state of journalism in the election aftermath.

And there should be. I have written much about the deplorable state of that particular industry. Many, many things need to be fixed. 

For example, let’s talk about the extreme polarization of both the U.S. population and its favored news sources. Last year about this time, the Pew Research Center released a study showing that over 30% of Americans distrust their news sources.

What’s more alarming is the partisan split: only 27% of Democrats didn’t trust the news for information about politics or elections. Among Republicans, that figure climbed to a whopping 67%.

The one news source Republicans do trust? Fox News. Sixty-five percent of them say Fox is reliable. 

And that’s a problem.

Earlier this year, Ad Fontes Media came out with its Media Bias Chart. It charts major news and media channels on two axes: source reliability and political bias. The correlation between bias and reliability is almost perfect. The further a news source is out to the right or left, the less reliable it is.

How does Fox fare? Not well. Ad Fontes separates Fox TV from Fox Online. Fox Online lies on the border between being “reliable for news, but high in analysis/opinion content” and “some reliability issues and/or extremism.” Fox TV falls squarely in the second category.

I’ve written before that media bias is not just a right-wing problem. Outlets like CNN and MSNBC show a significant left-leaning bias. But CNN Online, despite its bias, still falls within the “Most Reliable for News” category. According to Ad Fontes, MSNBC has the same reliability issues as Fox.

The question that has to be asked is “How did we get here?” And that’s the question tackled head-on in a new book, “Free is Bad,” by John Marshall.

I’ve known Marshall for ages. He has covered a lot of the things I’ve been writing about in this column. 

“It is difficult to get a man to understand something, when his salary depends on his not understanding it.” 

Upton Sinclair

The problem here is one of incentive. Our respective media heads didn’t wake up one morning and say, “You know what we need to be? A lot more biased!” They walked down that path step by step, driven by the need to find a revenue model that delivers profitability.

When we talk about our news channels, the obvious path to profitability is ad support. And to be supported by ads, you have to be able to target those ads. One of the most effective targeting strategies is to target by political belief, because it comes reliably bundled with a bunch of other beliefs that make it very easy to predict behaviors. And that makes these ads highly effective in converting prospects.

This is how we got to where we are. But there are many ways to prop up your profits by selling ads. Some are pretty open and transparent. Some are less so. And that brings us to a particularly interesting section of Marshall’s book.

John Marshall is a quant geek at heart. He has been a serial tech entrepreneur — and, in one of those ventures, built a very popular web analytics platform. He also has intimate knowledge of how the sausages are made in the ad-tech business. He knows sketchy advertising practices when he sees them. 

Given all of this, Marshall was able to undertake a fascinating analysis of the ads we see on various news platforms that dovetails nicely with the Ad Fontes chart. 

Marshall created the Ad Shenanigans chart. Basically, he did a forensic analysis of the advertising approaches of various online news platforms. He was looking for those that gathered data about their users, sold traffic to multiple networks, featured clickbait chumboxes, and engaged in other unsavory practices. Then he ranked them accordingly.

Not surprisingly, there’s a pretty strong correlation between reputable reporting and business ethics. Highly biased and less reputable sites on the Ad Fontes Bias Chart (Breitbart, NewsMax and Fox News) can all also be found near the top of Marshall’s Ad Shenanigans chart. Those that do seem to have some ethics when it comes to the types of ads they run also seem to take objective journalism seriously. Cases in point: The Guardian in the U.K. and ProPublica in the U.S.

The one anomaly in the group seems to be CNN. While it does fare relatively well on reputable reporting according to Ad Fontes, CNN appears to be willing to do just about anything to turn a buck. It ranks just a few slots below Fox in terms of “ad shenanigans.”

Marshall also breaks out those platforms that have a mix of paywalls and advertising. While there are some culprits in the mix, such as the Daily Caller, Slate and the National Review, most sites with some sort of subscription model seem far less likely to fling the gates of their walled gardens open to the ethically challenged advertising hordes.

All of this drives home Marshall’s message: When it comes to the quality of your news sources, free is bad. As soon as something costs you nothing, you are no longer the customer. You’re the product. Invisible hand market forces are no longer working for you. They are working for the advertiser. And that means they’re working against you if you’re looking for an unbiased, quality news source.

Amazon Prime: Buy Today, Pay Tomorrow?

This column goes live on the most eagerly anticipated day of the year. My neighbor, who has a never-ending parade of delivery vans stopping in front of her door, has it circled on her calendar. At least one of my daughters has been planning for it for several months. Even I, who tend to take a curmudgeonly view of many celebrations, have a soft spot in my heart for this particular one.

No, it’s not the day after Canadian Thanksgiving. This, my friends, is Amazon Prime Day!

Today, in our COVID-clouded reality, the day will likely hit a new peak of “Prime-ness.” Housebound and tired of being bludgeoned to death by WTF news headlines, we will undoubtedly treat ourselves to an unprecedented orgy of one-click shopping. And who can blame us? We can’t go to Disneyland, so leave me alone and let me order that smart home toilet plunger and the matching set of Fawlty Towers tea towels that I’ve been eyeing.

Of course, me being me, I do think about the consequences of Amazon’s rise to retail dominance. 

I think we’re at a watershed moment in our retail behaviors, and this moment has been driven forward precipitously by the current pandemic. Being locked down has forced many of us to make Amazon our default destination for buying. Speaking solely as a sample of one, I now check Amazon first and then use that as my baseline for comparison shopping. But I do so for purely selfish reasons – buying stuff on Amazon is convenient as hell!

I don’t think I’m alone. We do seem to love us some Amazon. In a 2018 survey conducted by Recode, respondents said that Amazon had the most positive impact on society of any major tech company. And that was pre-pandemic. I suspect this halo effect has only increased since Amazon became the consumer lifeline for a world forced to stay at home.

As I give in to the siren call of Bezos and Co., I wonder what forces I might be unleashing. What unintended consequences might come home to roost in the years hence? Here are a few possibilities.

The Corporate Conundrum

First of all, let’s not kid ourselves. Amazon is a for-profit corporation. It has shareholders that demand results. The biggest of those shareholders is Jeff Bezos, who is the world’s richest man. 

But amazingly, not all of Amazon’s shareholders are focused on the quarterly financials. Many of them – with an eye to the long game – are demanding that Amazon adopt a more ethical balance sheet. At the 2019 annual shareholder meeting, a list of 12 resolutions was brought forward to be voted on. The recommendations included zero tolerance for sexual harassment and hate speech, curbing Amazon’s facial recognition technology, and addressing climate change and Amazon’s own environmental impact. These last two were supported by a letter signed by 7,600 of Amazon’s own employees.

The result? Amazon strenuously fought every one of them and none were adopted. So, before we get all warm and gooey about how wonderful Amazon is, let’s remember that the people running the joint have made it very clear that they will absolutely put profit before ethics. 

A Dagger in the Heart of Our Communities

For hundreds of years, we have been building a supply chain that was bound by the realities of geography. That supply chain required some type of physical presence within a stone’s throw of where we live. Amazon has broken that chain and we are beginning to feel the impact of that. 

Community shopping districts around the world were being gutted by the “Amazon Effect” even before COVID. In the last six months, that dangerous trend has accelerated exponentially. In a 2018 commentary for CNBC, venture capitalist Alan Patricof worried about the social impact of losing our community gathering spots: “This decline has brought a deterioration in places where people congregated, socialized, made friends and were greeted by a friendly face offering an intangible element of belonging to a community.”

The social glue that held us together has been dissolving over the past two decades. Whether you’re a fan of shopping malls or not (I fall into the “not” category), they were at least a common space where you might run into your neighbor. In his 2000 book Bowling Alone, Harvard political scientist Robert Putnam documented the erosion of social capital in America. We are now 20 years on, and Putnam’s worst-case scenario seems quaintly optimistic. With the loss of our common ground – in the most literal sense – we increasingly retreat to the echo chambers of social media.

Frictionless Consumerism

This last point is perhaps the most worrying. Amazon has made it stupid simple to buy stuff. They have relentlessly squeezed every last bit of friction out of the path to purchase. That worries me greatly.

If we could rely on a rational marketplace filled with buyers acting in the best homo economicus tradition, then perhaps I could rest easier, knowing that there was some type of intelligence driving Adam Smith’s Invisible Hand. But experience has shown that is not the case. Rampant consumerism appears to be one of the three horsemen of the modern apocalypse. And, if this is true, then Amazon has put us squarely in their path.

This is not to even mention things like Amazon’s emerging monopoly-like dominance in a formerly competitive marketplace, the relentless downward pressure it exerts on wages within its supply chain, the evaporation of jobs outside its supply chain or the privacy considerations of Alexa. 

Still, enjoy your Amazon Prime Day. I’m sure everything will be fine.

Bubbles, Bozos and the Mediocrity Sandwich

I spent most of my professional life inside the high-tech bubble. Having now survived the better part of a decade outside said bubble, I have achieved enough distance to be able to appreciate the lampooning skills of Dan Lyons. If that name doesn’t sound familiar, you may have seen his work. He was the real person behind the Fake Steve Jobs blog. He was also the senior technology editor for Forbes and Newsweek prior to being cut loose in the print media implosion. He later joined the writing staff of Mike Judge’s brilliant HBO series Silicon Valley.

Somewhere in that career arc, Lyons briefly worked at a high-tech start-up. From that experience, he wrote “Disrupted: My Misadventure in the Start-Up Bubble.” It gives new meaning to the phrase “painfully funny.”

After being cast adrift by Forbes, Lyons decided to change his perspective on the Bubble from “outside looking in” to “inside looking out.” He wanted to jump on the bubble bandwagon, grab a fistful of options and cash in. And so he joined HubSpot as a content producer for its corporate blog. The story unfolds from there.

One particularly sharp and insightful chapter of the book recalls Steve Jobs’ “Bozo Explosion”:

“Apple CEO Steve Jobs used to talk about a phenomenon called a ‘bozo explosion,’ by which a company’s mediocre early hires rise up through the ranks and end up running departments. The bozos now must hire other people, and of course they prefer to hire bozos. As Guy Kawasaki, who worked with Jobs at Apple, puts it: ‘B players hire C players, so they can feel superior to them, and C players hire D players.’”

The Bozo Explosion is somewhat unique to tech start-ups, mainly because of some of the aspects of the culture I talked about in a previous column. But I ran into my own version back in my consulting career. And I ran into it in all kinds of companies. I used to call it the Mediocrity Sandwich.

The Mediocrity Sandwich lives in middle management. I used to find that the people at the C-level of a company were usually pretty smart and competent (that said, I did run across some notable exceptions in my time). I also found that the people on the customer-facing front lines were pretty smart too and – more importantly – very aware of the company’s own issues.

But addressing those issues invariably caused a problem. You had senior executives who were certainly capable of fixing the problems, whatever they might be. And you had front-line employees who were painfully aware of what the problems were and motivated to implement solutions. But the momentum of any real problem-solving initiative got sucked out somewhere in the middle of the corporate org chart. The problem was the Mediocrity Sandwich.

You see, I don’t think the Bozo Explosion is so much a pyramid – skinny at the top, broad at the bottom – as it is an inverted U-shaped curve. I think “bozoism” tends to peak in the middle. You certainly have the progression from A’s to B’s to C’s as you move down from the top executive rungs. But then you have the inverse happening as you move from middle management to the front lines. The problem is the attrition of competence as people become absorbed into the organization. It’s the Bozo Explosion in reverse.

I usually found there was enough breathing room for competence to survive at the entry level of the organization. There were enough degrees of separation between the front line and the bozos in middle management. But as you started to climb the corporate ladder, you kept getting closer to the bozos. Your job frustration began to climb as they gained more influence over your day-to-day work. Truly competent players bailed and moved on to less bozo-infested environments. Those who remained either were born bozos or had bozo-ness thrust upon them. Either way, as you climbed toward middle management, the bozo factor climbed in lockstep. The result? A bell curve of bozos centered in the middle, between the C-level and the front lines.

This creates a poisonous outlook for the long-term prospects of a company. Eventually, the C-level executives will age out of their jobs. But who will replace them? The internal farm team is a bunch of bozos. You can recruit from outside, but then the incoming talent inherits a Mediocrity Sandwich. The company begins to rot from within.

For companies to truly change, you have to root out the bozo-rot, but this is easier said than done. If there is one single thing that bozos are good at, it is bozo butt-covering.

What Happens When A Black Swan Beats Up Your Brand

I’m guessing the word Corona brings many things to your mind right now — and a glass of ice-cold beer may not be one of them. A brand that once made us think of warm, sunny beaches and Mexican vacations on the Mayan Riviera is now mentally linked to a global health crisis. Sometimes the branding gods smile on you in their serendipity, and sometimes they piss in your cornflakes. For Grupo Modelo, the maker of Corona beer, the latter is most definitely the case.

As MediaPost editor Joe Mandese highlighted in a post last week, almost 40% of American beer drinkers in a recent poll said they would not buy Corona under any circumstances. Fifteen percent of regular Corona drinkers said they would no longer order it in public. No matter how you slice those numbers, that does not bode well for the U.S.’s top-selling imported beer.

It remains to be seen what effect the emerging pandemic will have on the almost 100-year-old brand. Obviously, Grupo Modelo, the owner of the brand, denies that there is any permanent damage. But then, what else would you expect it to say? There’s a lot of beer sitting on shelves around the world, waiting to be drunk. It’s just unfortunate it shares a name with a health crisis that is, so far, the biggest story of this decade.

This is probably not what the marketing spin doctors at Grupo Modelo want to hear, but a similar thing happened about 40 years ago.  Here is the story of another brand whose name got linked to the biggest health tragedy of the 1980s.

In 1946, the Carlay Company of Chicago registered a trademark for a “reducing plan vitamin and mineral candy” that had already been in commercial use for almost a decade. The company had claimed that users of the new “vitamin” could “lose up to 10 pounds in 5 days, without dieting or exercising.” The Federal Trade Commission called bullshit on that claim, forcing the Carlay Company to strip it from its marketing back in 1944.

Marketing being marketing, it wasn’t the vitamins in this “vitamin” that allegedly caused the pounds to melt away. In the beginning, it was something chemists call benzocaine. That’s a topical anesthetic you’ll also find in over-the-counter products like Orajel. Basically, benzocaine numbed the tongue. The theory was that a tongue that couldn’t taste anything would be less likely to crave food.

The active ingredient was later changed to phenylpropanolamine, which was also used as a decongestant in cold medications and to control urinary incontinence in dogs. In the ‘60s and ’70s, it became a common ingredient in many diet pills. Then it was discovered to cause strokes in young women.

The Carlay Company eventually became part of the Campana Corporation, which in turn was sold to Purex. The product morphed from a vitamin to a diet candy and was sold in multiple flavors, including chocolate, chocolate mint, butterscotch and caramel. If you remember Kraft caramels — little brown cubes packaged in clear cellophane — you have a good idea what these diet candies looked like.

Despite the shaky claims and dubious ingredients, the diet candies became quite popular. I remember that my mother, who had a lifelong struggle with her weight, usually had a box of them in the cupboard when I was growing up. Sales hit their peak in the ‘70s and early ‘80s. There were TV ads, and celebrity endorsers — including Bob Hope and Tyrone Power — lined up to hawk them.

Then, in 1981, the Centers for Disease Control and Prevention (CDC) published a report about five previously healthy men who all became infected with pneumocystis pneumonia. The odd thing was that this type of pneumonia is almost never found in healthy people. There was another odd thing. All five men were gay. In 1982, the CDC gave a name to this new disease: AIDS.

Of all the ways AIDS changed our world in the 1980s, one was particularly relevant to the marketers of those diet candies, which just happened to be named Ayds.

You can see the problem.

Ayds soldiered on until 1988, even as sales dropped 50%. The company tried new names, including Diet Ayds and, in the U.K., Aydslim. It was too little, too late. The candies were eventually withdrawn from the market.

Does this foretell the fate of Corona beer? Perhaps not. AIDS has been part of our public consciousness for four decades. A product with a similar-sounding name didn’t stand a chance. We can hope that the coronavirus will not have the same longevity. And the official name of the outbreak has now been changed to COVID-19. For both of these reasons, Corona — the beer — might be able to ride out the storm caused by corona, the virus.

But you can bet that there are some pretty uncomfortable meetings being held right now in the marketing department boardroom at Grupo Modelo.

What is the Moral Responsibility of a Platform?

The owners of the Airbnb home in Orinda, California, suspected something was up. The woman who wanted to rent the house for Halloween night swore it wasn’t for a party. She said it was for a family reunion that had to relocate at the last minute because of the wildfire smoke from the Kincade fire, 85 miles north of Orinda. The owners reluctantly agreed to rent the home for one night.

Shortly after 9 p.m., the neighbors called the owners, complaining of a party raging next door. The owners verified this through their doorbell camera. Police were dispatched. More than 100 people, who had responded to a post on social media, were packed into the million-dollar home. At 10:45 p.m., with no warning, things turned deadly. Gunshots were fired. Four men in their twenties were killed immediately. A 19-year-old woman died the next day. Several others were injured.

Here is my question: Is Airbnb partly to blame for this?

This is a prickly question. And it’s one that extends to any of the highly disruptive platforms. Technical disruption is a race against our need for order and predictability. When the status quo is upended, there is a progression toward a new civility, but that progression takes time, and technology is outstripping it. Platforms create new opportunities – for the best of us and the worst.

The simple fact is that technology always unleashes ethical ramifications – the more disruptive the technology, the more serious the ethical considerations. The other tricky bit is that some ethical considerations can be foreseen, but others cannot.

I have often said that our world is becoming a more complex place. Technology is multiplying this complexity at an ever-increasing pace. And the more complex things are, the more difficult they are to predict.

As Homo Deus author Yuval Noah Harari puts it, the pace of technology is making our world so complex that it is becoming increasingly difficult to predict what the future might hold:

“Today our knowledge is increasing at breakneck speed, and theoretically we should understand the world better and better. But the very opposite is happening. Our new-found knowledge leads to faster economic, social and political changes; in an attempt to understand what is happening, we accelerate the accumulation of knowledge, which leads only to faster and greater upheavals. Consequently, we are less and less able to make sense of the present or forecast the future.”

This acceleration is also eliminating the gap between cause and consequence. We used to have the luxury of time to digest disruption. But now, the gap between the introduction of the technology and the ripples of the ramifications is shrinking.

Think about the ethical dilemmas and social implications introduced by the invention of the printing press. Thanks to this technology, literacy crept down through the social classes, disrupting entire established hierarchies, unleashing ideological revolutions and ushering in tsunamis of social change. But the causes and consequences were separated by decades, even centuries. Should Gutenberg be held responsible for the French Revolution? The question seems laughable, but only because almost three and a half centuries lie between the two.

As the printing press eventually proved, technology typically dismantles vertical hierarchies. It democratizes capabilities – spreading them down to new users and – in the process – making the previously impossible possible. I have always said that technology is simply a tool, albeit an often disruptive one. It doesn’t change human behaviors. It enables them. But here we have an interesting phenomenon. If technology pushes capabilities down to more people and simultaneously frees those users from the restraint of a verticalized governing structure, you have a highly disruptive sociological experiment happening in real time with a vast sample of subjects.

Most things about human nature are governed by a normal distribution curve – also known as a bell curve. Behaviors expressed through new technologies are no exception. When you rapidly expand access to a capability you are going to have a spectrum of ethical attitudes interacting with it. At one end of the spectrum, you will have bad actors. You will find these actors on both sides of a market expanding at roughly the same rate as our universe. And those actors will do awful things with the technology.

Our innate sense of fairness seeks a simple line between cause and effect. If shootings happen at an Airbnb party house, then Airbnb should be held at least partly responsible. Right?

I’m not so sure. That’s the simple answer, but after giving it much thought, I don’t believe it’s the right one. As with my earlier example of the printing press, I think trying to saddle a new technology with the unintentional and unforeseen social disruption it unleashes is overly myopic. It’s an attitude that would halt technological progress in its tracks.

I fervently believe new technologies should be designed with humanitarian principles in mind. They should elevate humans, strive for neutrality, be impartial and foster independence. In the real world, they should do all this in a framework that allows for profitability. It is this, and only this, that is reasonable to ask from any new technology. To try to ask it to foresee every potential negative outcome or to retroactively hold it accountable when those outcomes do eventually occur is both unreasonable and unrealistic.

Disruptive technologies will always find the loopholes in our social fabric. They will make us aware of the vulnerabilities in our legislation and governance. If there is an answer to be found here, it is to be found in ourselves. We need to take accountability for the consequences of the technologies we adopt. We need to vote for governments that are committed to keeping pace with disruption through timely and effective governance.

Like it or not, the technology we have created and adopted has propelled us into a new era of complexity and unpredictability. We are flying into uncharted territory by the seat of our pants here. And before we rush to point fingers we should remember – we’re the ones that asked for it.

The Fundamentals of an Evil Marketplace

Last week, I talked about the nature of tech companies and why this leads to them being evil. But as I said, there was an elephant in the room I didn’t touch on — and that’s the nature of the market itself. The platform-based market also has inherent characteristics that lead toward being evil.

The problem is that corporate ethics are usually based on the philosophies of Milton Friedman, an economist whose heyday was in the 1970s. Corporations are playing by a rule book that is tragically out of date.

Beware the Invisible Hand

Friedman said, “The great virtue of a free market system is that it does not care what color people are; it does not care what their religion is; it only cares whether they can produce something you want to buy. It is the most effective system we have discovered to enable people who hate one another to deal with one another and help one another.”

This is a porting over of Adam Smith’s “Invisible Hand” theory from economics to ethics: the idea that an open and free marketplace is self-regulating and, in the end, the model that is the most virtuous to the greatest number of people will take hold.

That was a philosophy born in another time, referring to a decidedly different market. Friedman’s “virtue” depends on a few traditional market conditions, idealized in the concept of a perfect market: “a market where the sellers of a product or service are free to compete fairly, and sellers and buyers have complete information.”

Inherent in Friedman’s definition of market ethics is the idea of a deliberate transaction, a value trade driven by rational thought. This is where the concept of “complete information” comes in. This information is what’s required for a rational evaluation of the value trade. When we talk about the erosion of ethics we see in tech, we quickly see that the prerequisite of a deliberate and rational transaction is missing — and with it, the conditions needed for an ethical “invisible hand.”

The other assumption in Friedman’s definition is a marketplace that encourages open and healthy competition. This gives buyers the latitude to make the choice that best aligns with their requirements.

But when we’re talking about markets that tend to trend towards evil behaviors, we have to understand that there’s a slippery slope that ends in a place far different than the one Friedman idealized.

Advertising as a Revenue Model

For developers of user-dependent networks like Google and Facebook, using advertising sales for revenue was the path of least resistance for adoption — and, once adopted by users, to profitability. It was a model co-opted from other forms of media, so everybody was familiar with it. But, in the adoption of that model, the industry took several steps away from the idea of a perfect market.

First of all, you have significantly lowered the bar required for that rational value exchange calculation. For users, there is no apparent monetary cost. Our value judgement mechanisms idle down because it doesn’t appear as if the protection they provide is needed.

In fact, the opposite happens. The reward center of our brain perceives a bargain and starts pumping the accelerator. We rush past the accept buttons to sign up, thrilled at the new capabilities and convenience we receive for free. That’s the first problem.

The second is that the minute you introduce advertising, you lose the transparency that’s part of the perfect market. There is a thick layer of obfuscation that sits between “users” and “producers.” The smoke screen is required because of the simple reality that the best interests of the user are almost never aligned with the best interests of the advertiser.

In this new marketplace, advertising is a zero-sum game. For the advertiser to win, the user has to lose. The developers of these platforms hide this simple arithmetic behind a veil of secrecy and baffling language.

Products That are a Little Too Personal

The new marketplace is different in another important way: The products it deals in are unlike any products we’ve ever seen before.

The average person spends about a third of his or her time online, mostly interacting with a small handful of apps and platforms. Facebook alone accounts for almost 20% of all our waking time.

This reliance on these products reinforces our belief that we’re getting the bargain of a lifetime: All the benefits the platform provides are absolutely free to us! Of course, in the time we spend online, we are feeding these tools a constant stream of intimately personal information about ourselves.

What is lurking behind this benign facade is a troubling progression of addictiveness. Because revenue depends on advertising sales, two factors become essential to success: the attention of users, and information about them.

An offer of convenience or usefulness “for free” is the initial hook, but then it becomes essential to entice users to spend more time with the platform and to volunteer more information about themselves. The most effective way to do this is to make them more and more dependent on the platform.

Now, you could build conscious dependency by giving users good, rational reasons to keep coming back. Or, you could build dependence subconsciously, by creating addicts. The first option is good business that follows Friedman’s philosophy. The second option is just evil. Many tech platforms — Facebook included — have chosen to go down both paths.

The New Monopolies

The final piece of Friedman’s idealized marketplace that’s missing is the concept of healthy competition. In a perfect marketplace, the buyer’s cost of switching is minimal. You have a plethora of options to choose from, and you’re free to pursue the one best for you.

This is definitely not the case in the marketplace of online platforms and tools like Google and Facebook. Because they are dependent on advertising revenues, their survival is linked to audience retention. To this end, they have constructed virtual monopolies by ruthlessly eliminating or buying up any potential competitors.

Further, under the guise of convenience, they have imposed significant costs on those that do choose to leave. The net effect of this is that users are faced with a binary decision: Opt into the functionality and convenience offered, or opt out. There are no other choices.

Whom Do You Serve?

Friedman also said, in a 1970 essay, that the only social responsibility of a business is to increase its profits. But this begs the further question, “What must be done — and for whom — to increase profits?” If it’s creating a better product so users buy more, then there is an ethical trickle-down effect that should benefit all.

But this isn’t the case if profitability is dependent on selling more advertising. Now we have to deal with an inherent ethical conflict. On one side, you have the shareholders and advertisers. On the other, you have users. As I said, for one to win, the other must lose. If we’re looking for the root of all evil, we’ll probably find it here.

Why Good Tech Companies Keep Being Evil

You’d think we’d have learned by now. But somehow it still comes as a shock to us when tech companies are exposed as having no moral compass.

Slate recently released what it called the “Evil List” of 30 tech companies, compiled through a ballot sent out to journalists, scholars, analysts, advocates and others. Slate asked them which companies were doing business in the way that troubled them most. Spoiler alert: Amazon, Facebook and Google topped the list. But they weren’t alone. Rounding out the top 10, the list of culprits included Twitter, Apple, Microsoft and Uber.

Which begs the question: Are tech companies inherently evil — like, say, a Monsanto or Philip Morris — or is there something about tech that positively correlates with “evilness”?

I suspect it’s the second of these. I don’t believe Silicon Valley is full of fundamentally evil geniuses, but doing business as usual at a successful tech firm means there will be a number of elemental aspects of the culture that take a company down the path to being evil.

Cultism, Loyalism and Self-Selection Bias

A successful tech company is a belief-driven meat grinder that sucks in raw, naïve talent on one end and spits out exhausted and disillusioned husks on the other. To survive in between, you’d better get with the program.

The HR dynamics of a tech startup have been called a meritocracy, where intellectual prowess is the only currency.

But that’s not quite right. Yes, you have to be smart, but it’s more important that you’re loyal. Despite their brilliance, heretics are weeded out and summarily turfed, optionless in more ways than one. A rigidly molded group-think mindset takes over the recruitment process, leading to an intellectually homogeneous monolith.

To be fair, high-growth startups need this type of mental cohesion. As blogger Paras Chopra said in a post entitled “Why startups need to be cult-like”: “The reason startups should aim to be like cults is because communication is impossible between people with different values.” You can’t go from zero to 100 without this sharing of values.

But necessary or not, this doesn’t change the fact that your average tech startup is a cult, with all the same ideological underpinnings. And the more cult-like a culture, the less likely it is to take the time for a little ethical navel-gazing.

A Different Definition of Problem Solving

When all you have is a hammer, everything looks like a nail. And for the engineer, the hammer that fixes everything is technology. But, as academic researchers Emanuel Moss and Jacob Metcalf discovered, this brand of technical solutionism can lead to a corporate environment where ethical problems are ignored because they are open-ended, intractable questions. In a previous column I referred to them as “wicked problems.”

As Moss and Metcalf found, “Organizational practices that facilitate technical success are often ported over to ethics challenges. This is manifested in the search for checklists, procedures, and evaluative metrics that could break down messy questions of ethics into digestible engineering work. This optimism is counterweighted by a concern that, even when posed as a technical question, ethics becomes ‘intractable, like it’s too big of a problem to tackle.’”

If you take this to the extreme, you get the Cambridge Analytica example, where programmer Christopher Wylie was so focused on the technical aspects of the platform he was building that he lost sight of the ethical monster he was unleashing.

A Question of Leadership

Of course, every cult needs a charismatic leader, and this is abundantly true for tech-based companies. Hubris is a commodity not in short supply among the C-level execs of tech.

It’s not that they’re assholes (well, ethical assholes anyway). It’s just that they’re, umm, highly focused and instantly dismissive of any viewpoint that’s not the same as their own. It’s the same issue I mentioned before about the pitfalls of expertise — but on steroids.

I suspect that if you did an ethical inventory of Mark Zuckerberg, Jeff Bezos, Larry Page, Sergey Brin, Travis Kalanick, Reid Hoffman and the rest, you’d find that — on the whole — they’re not horrible people. It’s just that they have a very specific definition of ethics as it pertains to their company. Anything that falls outside those narrowly defined boundaries is either dismissed or “handled” so it doesn’t get in the way of the corporate mission.

Speaking of corporate missions, leaders and their acolytes are often unaware — sometimes willfully so — of the nuances of unintended consequences. Most tech companies develop platforms that allow disruptive new market-based ecosystems to evolve on their technological foundations. Disruption always unleashes unintended social consequences. When these inevitably happen, tech companies generally handle them in one of three ways:

  1. Ignore them, and if that fails…
  2. Deny responsibility, and if that fails…
  3. Briefly apologize, do nothing, and then return to Step 1.

There is a weird type of idol worship in tech. The person atop the org chart is more than an executive. They are corporate gods — and those who dare to disagree are quickly weeded out as heretics. This helps explain why Facebook can be pilloried for attacks on personal privacy and questionable design ethics, yet Mark Zuckerberg still snags a 92% CEO approval rating on Glassdoor.com.

These fundamental characteristics help explain why tech companies seem to consistently stumble over to the dark side. But there’s an elephant in the room we haven’t talked about. Almost without exception, tech business models encourage evil behavior. Let’s hold that thought for a future discussion.

Why Quitting Facebook is Easier Said than Done

Not too long ago, I was listening to an interview with a privacy expert about… you guessed it, Facebook. The gist of the interview was that Facebook can’t be trusted with our personal data, as it has proven time and again.

But when asked if she would quit Facebook completely because of this — as tech columnist Walt Mossberg did — the expert said something interesting: “I can’t really afford to give up Facebook completely. For me, being able to quit Facebook is a position of privilege.”

Wow! There is a lot living in that statement. It means Facebook is fundamental to most of our lives — it’s an essential service. But it also means that we don’t trust it — at all. Which puts Facebook in the same category as banks, cable companies and every level of government.

Facebook — in many minds, anyway — became an essential service because of Metcalfe’s Law, which states that the value of a network is proportional to the square of the number of connected users of the system. More users means disproportionately more value. Facebook has Metcalfe’s Law nailed. It has almost two and a half billion users.
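The quadratic math behind Metcalfe’s Law is easy to sketch. Here’s a quick illustration in Python (the user counts are arbitrary, chosen only to show the scaling):

```python
def metcalfe_value(n_users: int) -> int:
    """Metcalfe's Law: a network's value grows with the square of its
    user count -- proportional to the number of possible pairwise
    connections, n * (n - 1) / 2."""
    return n_users * (n_users - 1) // 2

# Doubling the user base roughly quadruples the network's value.
small = metcalfe_value(1_000_000)
large = metcalfe_value(2_000_000)
print(round(large / small, 2))  # prints 4.0
```

Which is why a network with a couple of billion users isn’t just bigger than its rivals — by this logic, it’s worth orders of magnitude more.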

But it’s more than just sheer numbers. It’s the nature of engagement. Thanks to a premeditated addictiveness in Facebook’s design, its users are regular users. Of those 2.5 billion users, 1.6 billion log in daily, and 1.1 billion of them do so from a mobile device. That means roughly 15% of all the people in the world are constantly — addictively — connected to Facebook.

And that’s why Facebook appears to be essential. If we need to connect to people, Facebook is the most obvious way to do it. If we have a business, we need Facebook to let our potential customers know what we’re doing. If we belong to a group or organization, we need Facebook to stay in touch with other members. If we are social beasts at all, we need Facebook to keep our social network from fraying away.

We don’t trust Facebook — but we do need it.

Or do we? After all, we homo sapiens have managed to survive for 99.9925% of our collective existence without Facebook. And there is mounting research indicating that going cold turkey on Facebook is great for your mental health. But like all things that are good for you, quitting Facebook can be a real pain in the ass.
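That suspiciously precise 99.9925% holds up as back-of-the-envelope arithmetic. Here’s a quick check, assuming (my assumptions, not the column’s) that Homo sapiens is roughly 200,000 years old and that Facebook, launched in 2004, had been around for about 15 years at the time of writing:

```python
# Rough sanity check of the 99.9925% figure.
# Assumptions (both approximate): Homo sapiens is ~200,000 years old,
# and Facebook had existed for ~15 years at the time of writing.
species_age_years = 200_000
facebook_age_years = 15

pct_without_facebook = (1 - facebook_age_years / species_age_years) * 100
print(f"{pct_without_facebook:.4f}%")  # prints 99.9925%
```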

Last year, New York Times tech writer Brian Chen decided to ditch Facebook. This is a guy who is fully conversant in tech — and even he found making the break is much easier said than done. Facebook, in its malevolent brilliance, has erected some significant barriers to exit for its users if they do try to make a break for it.

This is especially true if you have fallen into the convenient trap of using Facebook’s social sign-in on sites rather than juggling multiple passwords and user IDs. If you’re up for the challenge, Chen has put together a 6-step guide to making a clean break of it.

But what if you happen to use Facebook for advertising? You’ve essentially sold your soul to Zuckerberg. Reading through Chen’s guide, I’ve decided that it’s just easier to go into the Witness Protection Program. Even there, Facebook will still be tracking me.

By the way, after six months without Facebook, Chen did a follow-up on how his life had changed. The short answer is: not much, but what did change was for the better. His family didn’t collapse. His friends didn’t desert him. He still managed to have a social life. He spent a lot less on spontaneous online purchases. And he read more books.

The biggest outcome was that advertisers “gave up on stalking” him. Without a steady stream of personal data from Facebook, Instagram thought he was a woman.

Whether you’re able to swear off Facebook completely or not, I wonder what the continuing meltdown of trust in Facebook will do for its usage patterns. As in most things digital, young people seem to have intuitively stumbled on the best way to use Facebook. Use it if you must to connect to people when you need to (in their case, grandmothers and great-aunts) — but for heaven’s sake, don’t post anything even faintly personal. Never afford Facebook’s AI the briefest glimpse into your soul. No personal affirmations, no confessionals, no motivational posts and — for the love of all that is democratic — nothing political.

Oh, one more thing. Keep your damned finger off of the like button, unless it’s for your cousin Shermy’s 55th birthday celebration in Zihuatanejo.

Even then, maybe it’s time to pick up the phone and call the ol’ Shermeister. It’s been too long.

The Hidden Agenda Behind Zuckerberg’s “Meaningful Interactions”

It probably started with a good intention. Facebook — aka Mark Zuckerberg — wanted to encourage more “Meaningful Interactions.” And so, early last year, Facebook engineers started making some significant changes to the algorithm that determined what you saw in your News Feed. Here are some excerpts from Zuck’s post to that effect:

“The research shows that when we use social media to connect with people we care about, it can be good for our well-being. We can feel more connected and less lonely, and that correlates with long term measures of happiness and health. On the other hand, passively reading articles or watching videos — even if they’re entertaining or informative — may not be as good.”

That makes sense, right? It sounds logical. Zuckerberg went on to say how they were changing Facebook’s algorithm to encourage more “Meaningful Interactions.”

“The first changes you’ll see will be in News Feed, where you can expect to see more from your friends, family and groups.

As we roll this out, you’ll see less public content like posts from businesses, brands, and media. And the public content you see more will be held to the same standard — it should encourage meaningful interactions between people.”


Let’s fast-forward almost two years, and we now see the outcome of that good intention: an ideological landscape with a huge chasm where the middle ground used to be.

The problem is that Facebook’s algorithm naturally favors content from like-minded people. And surprisingly, it doesn’t take a very high degree of ideological homogeneity to create a highly polarized landscape. This shouldn’t have come as a surprise. American Economist Thomas Schelling showed us how easy it was for segregation to happen almost 50 years ago.

The Schelling Model of Segregation was created to demonstrate why racial segregation was such a chronic problem in the U.S., even given repeated efforts to desegregate. The model showed that even when we’re pretty open minded about who our neighbors are, we will still tend to self-segregate over time.

The model works like this: A grid represents a population with two different types of agents, X and O. The square the agent is in represents where they live. If the agent is satisfied, they will stay put. If they aren’t satisfied, they will move to a new location. The variable here is the level of satisfaction, determined by what percentage of their immediate neighbours are the same type of agent as they are. For example, the level of satisfaction might be set at 50%, meaning an X agent needs at least 50% of its neighbours to also be of type X. (If you want to try the model firsthand, Frank McCown, a computer science professor at Harding University, created an online version.)

The most surprising thing that comes out of the model is that this threshold of satisfaction doesn’t have to be set very high at all for extensive segregation to happen over time. You start to see significant “clumping” of agent types at percentages as low as 25%. At 40% and higher, you see sharp divides between the X and O communities. Remember, even at 40%, that means that Agent X only wants 40% of their neighbours to also be of the X persuasion. They’re okay being surrounded by up to 60% Os. That is much more open-minded than most human agents I know.
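For the curious, the model is simple enough to sketch in a few dozen lines. Here is a minimal, illustrative Python version — the grid size, empty fraction, and movement rule (unsatisfied agents jump to a random empty cell) are my own assumptions, not a faithful reproduction of any particular implementation:

```python
import random

def neighbors(grid, r, c, n):
    """Yield the contents of the (up to 8) cells adjacent to (r, c)."""
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0) and 0 <= r + dr < n and 0 <= c + dc < n:
                yield grid[r + dr][c + dc]

def satisfied(grid, r, c, n, threshold):
    """An agent is satisfied when at least `threshold` of its occupied
    neighbouring cells hold agents of its own type."""
    occupied = [nb for nb in neighbors(grid, r, c, n) if nb is not None]
    if not occupied:
        return True
    same = sum(nb == grid[r][c] for nb in occupied)
    return same / len(occupied) >= threshold

def sweep(grid, n, threshold):
    """Move every unsatisfied agent to a random empty cell.
    Returns how many agents moved."""
    empties = [(r, c) for r in range(n) for c in range(n) if grid[r][c] is None]
    moved = 0
    for r in range(n):
        for c in range(n):
            if grid[r][c] is not None and not satisfied(grid, r, c, n, threshold):
                er, ec = empties.pop(random.randrange(len(empties)))
                grid[er][ec], grid[r][c] = grid[r][c], None
                empties.append((r, c))
                moved += 1
    return moved

def simulate(n=30, empty_frac=0.1, threshold=0.4, max_sweeps=100, seed=1):
    """Run the model until no agent wants to move (or max_sweeps)."""
    random.seed(seed)
    n_agents = int(n * n * (1 - empty_frac))
    cells = ['X'] * (n_agents // 2) + ['O'] * (n_agents - n_agents // 2)
    cells += [None] * (n * n - len(cells))
    random.shuffle(cells)
    grid = [cells[i * n:(i + 1) * n] for i in range(n)]
    for _ in range(max_sweeps):
        if sweep(grid, n, threshold) == 0:
            break
    return grid

def segregation_index(grid, n):
    """Average fraction of same-type neighbours: about 0.5 for a random
    mix, approaching 1.0 for complete segregation."""
    fracs = []
    for r in range(n):
        for c in range(n):
            if grid[r][c] is None:
                continue
            occ = [nb for nb in neighbors(grid, r, c, n) if nb is not None]
            if occ:
                fracs.append(sum(nb == grid[r][c] for nb in occ) / len(occ))
    return sum(fracs) / len(fracs)

grid = simulate(threshold=0.4)
print(f"segregation index: {segregation_index(grid, 30):.2f}")
```

Run it and the index climbs well above the roughly 0.5 you’d see in a random mix, even though each agent is content to have 60% of its neighbours be “other.” That is Schelling’s point in executable form: mild individual preferences, stark collective segregation.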

Now, let’s move the Schelling Model to Facebook. We know from the model that even pretty open-minded people will physically segregate themselves over time. The difference is that on Facebook, they don’t move to a new part of the grid — they just hit the “unfollow” button. And the segregation isn’t physical — it’s ideological.

This natural behavior is then accelerated by Facebook’s “Meaningful Interactions” algorithm, which filters on the basis of people you have connected with, setting in motion an ever-tightening spiral that eventually restricts your feed to a very narrow ideological horizon. The resulting cluster then becomes a segment used for ad targeting. We can quickly see how Facebook both intentionally built these very homogeneous clusters by changing its algorithm and then profits from them by providing advertisers the tools to micro-target them.

Finally, after doing all this, Facebook absolves itself of any responsibility to ensure subversive and blatantly false messaging isn’t delivered to these ideologically vulnerable clusters. It’s no wonder comedian Sacha Baron Cohen just took Zuck to task, saying “if Facebook were around in the 1930s, it would have allowed Hitler to post 30-second ads on his ‘solution’ to the ‘Jewish problem.’”

In rereading Mark Zuckerberg’s post from two years ago, you can’t help but start reading between the lines. First of all, there is mounting evidence that disproves his contention that meaningful social media encounters help your well-being. It appears that quitting Facebook entirely is much better for you.

And secondly, I suspect that — just like his defence of running false and malicious advertising by citing free speech — Zuck has a not-so-hidden agenda here. I’m sure Zuckerberg and his Facebook engineers weren’t oblivious to the fact that their changes to the algorithm would result in nicely segmented psychographic clusters that would be like catnip to advertisers — especially political advertisers. They were consolidating exactly the same vulnerabilities that were exploited by Cambridge Analytica.

They were building a platform that was perfectly suited to subvert democracy.

Why Elizabeth Warren Wants to Break Up Big Tech

Earlier this year, Democratic presidential candidate Elizabeth Warren posted an online missive in which she laid out her plans to break up big tech (notably Amazon, Google and Facebook). In it, she noted:

“Today’s big tech companies have too much power — too much power over our economy, our society, and our democracy. They’ve bulldozed competition, used our private information for profit, and tilted the playing field against everyone else. And in the process, they have hurt small businesses and stifled innovation.”

We, here in the west, are big believers in Adam Smith’s Invisible Hand. We inherently believe that markets will self-regulate and eventually balance themselves. We are loath to involve government in the running of a free market.

In introducing the concept of the Invisible Hand, Smith speculated:

“[The rich] consume little more than the poor, and in spite of their natural selfishness and rapacity…they divide with the poor the produce of all their improvements. They are led by an invisible hand to make nearly the same distribution of the necessaries of life, which would have been made, had the earth been divided into equal portions among all its inhabitants, and thus without intending it, without knowing it, advance the interest of the society, and afford means to the multiplication of the species.”

In short, a rising tide raises all boats. But there is a dicey little dilemma buried in the midst of the Invisible Hand premise — summed up most succinctly by the fictitious Gordon Gekko in the 1987 movie “Wall Street”: “Greed is good.”

More eloquently, economist and Nobel laureate Milton Friedman explained it like this:

“The great virtue of a free market system is that it does not care what color people are; it does not care what their religion is; it only cares whether they can produce something you want to buy. It is the most effective system we have discovered to enable people who hate one another to deal with one another and help one another.” 

But here’s the thing. Up until very recently, the concept of the Invisible Hand dealt only with physical goods. It was all about maximizing tangible resources and distributing them to the greatest number of people in the most efficient way possible.

The difference now is that we’re not just talking about toasters or running shoes. Physical things are not the stock in trade of Facebook or Google. They deal in information, feelings, emotions, beliefs and desires. We are not talking about hardware any longer — we are talking about the very operating system of our society. The thing that guides the Invisible Hand is no longer consumption; it’s influence. And, in that case, we have to wonder whether we’re willing to trust our future to the conscience of a corporation.

For this reason, I suspect Warren might be right. All the past arguments for keeping government out of business were based on a physical market. When we shift to a market that peddles influence, those arguments are flipped on their heads. Milton Friedman himself said, “It (the corporation) only cares whether they can produce something you want to buy.” Let’s shift that to today’s world and apply it to a corporation like Facebook: “It only cares whether they can produce something that captures your attention.” To expect anything else from a corporation that peddles persuasion is to expect too much.

The problem with Warren’s argument is that she is still using the language of a market that dealt with consumable products. She wants to break up a monopoly that is limiting competition. And she is targeting that message to an audience that generally believes that big government and free markets don’t mix.

The much, much bigger issue here is that even if you believe in the efficacy of the Invisible Hand, as described by all believers from Smith to Friedman, you also have to believe that the single purpose of a corporation that relies on selling persuasion will be to influence even more people more effectively. None of the most fervent evangelists of the Invisible Hand ever argued that corporations have a conscience. They simply stated that the interests of a profit-driven company and an audience intent on consumption were typically aligned.

We’re now playing a different game with significantly different rules.