A New Definition of Social

I am an introvert. My wife is an extrovert. Both of us have been self-isolating for about 5 weeks now. I don’t know if our experiences are representative of introverts and extroverts as a group, but my sample size has – by necessity – been reduced to an “n” of 2. Our respective concepts of what it means to be social have been disruptively redefined, but in very different ways.

The Extro-Version

You’ve probably heard of Dunbar’s Number, proposed by anthropologist Robin Dunbar. It’s the “suggested cognitive limit to the number of people with whom one can maintain stable social relationships.” The number, according to Dunbar, is about 150. But that number is not an absolute. It’s a theoretical limit. Some of us can juggle way more social connections than others.

My wife’s EQ (emotional quotient) is off the charts. She has a need to stay emotionally connected to a staggering number of people. Even in normal times, she probably “checks in” with dozens of people every week. Before COVID-19, this was done face-to-face whenever possible.

Now, her empathetic heart feels an even greater need to make sure everyone is doing okay. But she has to do it through socially distanced channels. She uses text messaging a lot. But she also makes at least a few phone calls every day for those in her network who are not part of the SMS or social media universe.

She has begun using Zoom to coordinate virtual get-togethers with a number of her friends. Many in this circle are also extroverts. A fair number of them are – like my wife – Italian. You can hear them recharging their social batteries as the energy and volume escalate. It’s not cappuccino and biscotti, but they are making do with what they’ve got.

Whatever the channel, it has been essential for my wife to maintain this continuum of connection.

The Intro-Version

There are memes circulating that paint the false picture that the time has finally come for us introverts. “I’ve been practicing for this my entire life,” says one. They consistently say that life in lockdown is much harder for extroverts than introverts. They even hint that we should be in an introvert’s heaven. They are wrong. I am not having the time of my life.

I’m not alone. Other introverts are having trouble adjusting to a social agenda being forced upon them by their self-isolated extrovert friends and colleagues. We introverts seldom get to write the rules of social acceptability, even in a global pandemic.

If you type “Are introverts more likely” into Google, it will suggest the following ways to complete that sentence: “to be depressed”, “to be single”, “to have anxiety”, “to be alcoholic”, and “to be psychopaths”. The world is not built for introverts.

To understand introversion vs. extroversion is to understand social energy. Unlike my wife, for whom social interactions are a source of renewal, for me they are a drain on energy. If I’m going to make the effort, it had better be worth my while. A non-introvert can’t understand that. It’s often interpreted as aloofness, boredom or just being rude. It’s none of these. It’s just our batteries being run down.

Speaking for myself, I don’t think most introverts are antisocial. We’re just “differently” social. We need connections the same as extroverts. But those connections are of a certain kind. It’s true that introverts are not good at small talk. But under the right circumstances, we do love to talk. Those circumstances are just more challenging in the current situation.

Take Zoom, for instance. My wife, the extrovert, and I, the introvert, have done some Zoom meetings side by side. I have noticed a distinct difference in how we Zoom. But before I talk about that, let me set up a comparison with a more typical example of an introvert’s version of hell: the dreaded neighborhood house party.

As an introvert in this scenario, I would be constantly reading body language and non-verbal cues to see if there was an opportunity to reluctantly crowbar my way into a conversation. I would only do so if the topic interested me. Even then, I would be subconsciously monitoring my audience to see if they looked bored. On the slightest sign of disinterest, I would awkwardly wind down the conversation and retreat to my corner.

It’s not that I don’t like to talk. But I much prefer sidebar one-on-one conversations. I don’t do well in environments where there is too much going on. In those scenarios, introverts tend to clam up and just listen.

Now, consider a Zoom “Happy Hour” with a number of other people. All of that non-verbal bandwidth we introverts rely on to pick and choose where we expend our limited social energy is gone. Although Zoom adds a video feed, it’s a very low-fidelity substitute for an in-the-flesh interaction.

With all this mental picking and choosing happening in the background, you can understand why introverts are slow to jump into the conversational queue and, when we finally do, we find that someone else (probably an extrovert) has started talking first. I’m constantly being asked, “Did you say something, Gord?” – at which point everyone stops talking and looks at my little Zoom cubicle, waiting for me to talk. That, my friends, is an introvert’s nightmare.

Finally, I Get the Last Word

Interestingly, neither my wife nor I are using Facebook much for connection. She has joined a few Facebook groups, one of which is a fan club for our provincial health officer, Dr. Bonnie Henry. Dr. Henry has become the most beloved person in B.C.

And I’m doing what I always tell everyone else not to do: following my Facebook newsfeed and going into self-isolated paroxysms of rage about the Pan-dumb-ic and the battle between science and stupidity.

There is one social sacrifice that both my wife and I agree on. The thing we miss most is the ability to hug those we love.

Quant vs. Qual in a Time of Crisis

Digesting reality is becoming more and more difficult. I often find myself gagging on it. Last Friday was a good example. I have been limiting my news intake for my own sanity, but Friday morning I went down the rabbit hole. Truth be told, I started doing some research for the post I was intending to write (which I will probably get to next week) and I was soon overwhelmed with what I was reading.

I’m beginning to suspect that we’re getting an extra dump of frightening news on Fridays as officials realize that it’s more difficult to enforce social distancing on weekends. Whether this is the case or not, I found my chest tightening from anxiety. My hands got shaky as I found myself clicking on frightening link after frightening link. Predictions scared the shit out of me. I was worried for my community and country. I was worried for myself. But most of all, I was worried for my kids, my wife, my dad, my in-laws and my family.

Fear and anxiety swamped my normally rational side. Intellect gave way to despair. That’s not a good mode for me. I have to run cool – I need to be rational to function. Emotions mentally shut me down.

So I retreated to the numbers. My single best source throughout this has been the posts from Tomas Pueyo – the VP of Growth at Course Hero. They are exhaustively researched statistical analyses and “what-if” models assembled by an ad-hoc team of rockstar quants. In his first post, on March 10 – “Coronavirus: Why You Must Act Now” – Pueyo and his team nailed it. If everyone had listened and followed his advice, we wouldn’t be where we are now. Similarly, his post on March 19 – “Coronavirus: The Hammer and The Dance” – gave a tough but rational prescription to follow. His latest – “Coronavirus: Out of Many, One” – drills down on a state-by-state analysis of COVID in the US.

I’m not going to blow smoke here. These are tough numbers to read. Even the best-case scenarios would have been impossible to imagine just a few weeks ago. But the worst-case scenarios are exponentially more frightening. And if you – like me – need to retreat to reason in order to keep functioning, this is the best rationale I’ve found for dealing with COVID-19. It’s not what we want to hear, but it’s what we must listen to.

In my marketing life, I always encouraged a healthy mix of both quantitative and qualitative perspectives in trying to understand what is real. I’ve said in the past: “Quantitative is watching the dashboard while you drive. Qualitative is looking out the windshield.”

I often find that marketers tend to focus too much on the numbers and not enough on the people on the other side of those numbers. We are an industry deluged with data, and it has made us less human.

Ironically, I now find myself on the other side of that argument. We have to understand that even our most trustworthy media sources are going to be telling us the stories that have the most impact on us. Whether you turn to Fox or CNN as your news source, you will be getting soundbites out of context that are – by design – sensational in nature. They may differ in their editorial slants, but – right or left – we can’t consider them representational of reality. They are the outliers.

Being human, we can’t help but apply these to our current reality. It’s called availability bias. In the simplest terms possible, it means that the things that are most in our face become our understanding of any given situation.

In normal times, these individual examples can heighten our humanity and make us a little less numb. They remind us of the relevance of the individual experience – the importance of every life and the tragedy of even one person suffering.

“If only one man dies of hunger, that is a tragedy.
If millions die, that’s only a statistic.”

– Joseph Stalin

Normally, I would never dream of quoting Joe Stalin in a post. But these are not normal times. And the fact is, Stalin was right. When we start looking at statistics and mathematical modelling, our brain works differently. It forces us to use a more rational cognitive mechanism, one less likely to be influenced by emotion. And in responding to a crisis, this is exactly the type of reasoning required.

This is a time unlike anything any of us has experienced. In times like this, actions should be based on the most accurate and scientific information possible. We need the cold, hard logic of math as a way to not become swamped by the wave of our own emotions. In order to make really difficult decisions for the greater good, we need to distance ourselves from our own little bubbles of reality, especially when that reality is made up of non-representative examples streamed to us through media channels.

Whipped Into a Frenzy

Once again, we’re in unprecedented territory. According to the CDC, COVID-19 is the first global pandemic since the 2009 H1N1 outbreak. While Facebook was around in 2009, it certainly wasn’t as pervasive or impactful as it is today. Neither – for that matter – was H1N1 when compared to COVID-19. That would make COVID-19 the first true pandemic in the age of social media.

While we’re tallying the rapidly mounting human and economic costs of the pandemic on a day-by-day basis, there is a third type of damage to consider. There will be a cognitive cost to this as well.

So let’s begin by unpacking the psychology of a pandemic. Then we’ll add the social media lens to that.

Emotional Contagion aka “The Toilet Paper Syndrome”

Do you have toilet paper at your local store? Me neither. Why?

The short answer is that there is no rational answer. There is no disruption in the supply chain of toilet paper. If you were inclined to stock up on something to battle COVID-19, hand sanitizer would be a much better choice. Search as you might, there is no logical reason why people should be pulling toilet paper by the palletful out of their local Costco.

There is really only one explanation: panic is contagious. It’s called emotional contagion. And there is an evolutionary explanation for it. We evolved as herd animals, and when our threats came from the environment around us, it made sense to panic when you saw your neighbor panicking. Those on the flanks of the herd acted as an early warning system for the rest. When you saw panic close to you, the odds were very good that you were about to be eaten, trampled or buried under a rockslide. We’re hardwired to live by the principle of “Monkey see, monkey do.”

Here’s the other thing about emotional contagion. It doesn’t work very well if you have to take time to think about it. Panicked responses to threats from your environment will only save your life if they happen instantly. Natural selection has ensured they bypass the slower and more rational processing loops of our brain.

But now let’s apply the social media lens to this. Before modern communication tools were invented, emotional contagion was limited by the constraints of physical proximity. It was the original application of social distancing. Emotions could spread to a social node linked by physical proximity, but it would seldom jump across ties to another node that was separated by distance.

Then came Facebook, a platform perfectly suited to emotional contagion. Through it, emotionally charged messages can spread like wildfire regardless of where the recipients might be – creating cascades of panic across all nodes in a social network.

Now we have cascades of panic causing – by definition – irrational responses. And that’s dangerous. As Wharton management professor Sigal Barsade said in a recent podcast, “I would argue that emotional contagion, unless we get a hold on it, is going to greatly amplify the damage caused by COVID-19.”

Why We Need to Keep Calm and Carry On

Keep Calm and Carry On – the famous slogan from World War II Britain – is more than just a platitude that looks good on a t-shirt. It’s a sound psychological strategy for survival, especially when faced with threats in a complex environment. We need to think with our whole brain and we can only do that when we’re not panicking.

Again, Dr. Barsade cautions us: “One of the things we also know from the research literature is that negative emotions, particularly fear and anxiety, cause us to become very rigid in our decision-making. We’re not creative. We’re not as analytical, so we actually make worse decisions.”

Let’s again consider the Facebook Factor (in this case, Facebook being my proxy for all social media). Negative emotional messages driven by fear get clicked and shared a lot on social media. Unfortunately, much of that messaging is – at best – factually incomplete or – at worst – a complete fabrication. A 2018 study from MIT showed that false news spreads six times faster on social media than factual information.

It gets worse. According to Pew Research, one in five Americans said that social media is their preferred source for news, surpassing newspapers. Among those 18 to 29, it was the number one source. When you consider the inherent flaws in the methodology of a voluntary questionnaire, you can bet the actual number is a lot higher.

Who Can You Trust?

Let’s assume we can stay calm. Let’s further assume we can remain rational. In order to make rational decisions, you need factual information.

Before 2016, you could generally rely on government sources to provide trustworthy information. But that was then. Now, we live in the reality distortion field that daily spews forth fabricated fiction from the Twitter account of Donald J. Trump, aka the President of the United States.

The intentional manipulation of the truth by those we should trust has a crippling effect on our ability to respond as a cohesive and committed community. Just a week and a half ago, a poll found that Democrats were twice as likely as Republicans to say that COVID-19 posed an imminent threat to the U.S. By logical extension, that means that Republicans were half as likely to do something to stop the spread of the disease.

My Plan for the Pandemic

Obviously, we live in a world of social media. COVID-19 or not, there is no going back. And while I have no idea what will happen regarding the pandemic, I do have a pretty good guess how this will play out on social media. Our behaviours will be amplified through social media and there will be a bell curve of those behaviors stretching from assholes to angels. We will see the best of ourselves – and the worst – magnified through the social media lens.

Given that, here’s what I’m planning to do. One I already mentioned. I’m going to keep calm. I’m going to do my damnedest to make calm, rational decisions based on trusted information (i.e. not from social media or the President of the United States) to protect myself, my loved ones and anyone else I can.

The other plan? I’m going to reread everything from Nassim Nicholas Taleb. This is a good time for all of us to brush up on our understanding of robustness and antifragility.

What is the Moral Responsibility of a Platform?

The owners of the AirBnB home in Orinda, California suspected something was up. The woman who wanted to rent the house for Halloween night swore it wasn’t for a party. She said it was for a family reunion that had to relocate at the last minute because of the wildfire smoke coming from the Kincade fire, 85 miles north of Orinda. The owners reluctantly agreed to rent the home for one night.

Shortly after 9 pm, the neighbors called the owners, complaining of a party raging next door. The owners verified this through their doorbell camera. The police were sent. Over 100 people who had responded to a post on social media were packed into the million-dollar home. At 10:45 pm, with no warning, things turned deadly. Gunshots were fired. Four men in their twenties were killed immediately. A 19-year-old female died the next day. Several others were injured.

Here is my question. Is AirBnB partly to blame for this?

This is a prickly question. And it’s one that extends to any of the platforms that are highly disruptive. Technical disruption is a race against our need for order and predictability. When the status quo is upended, there is a progression towards a new civility that takes time, but technology is outstripping it. Platforms create new opportunities – for the best of us and the worst.

The simple fact is that technology always unleashes ethical ramifications – the more disruptive the technology, the more serious the ethical considerations. The other tricky bit is that some ethical considerations can be foreseen… but others cannot.

I have often said that our world is becoming a more complex place. Technology is multiplying this complexity at an ever-increasing pace. And the more complex things are, the more difficult they are to predict.

As Homo Deus author Yuval Noah Harari puts it:

“Today our knowledge is increasing at breakneck speed, and theoretically we should understand the world better and better. But the very opposite is happening. Our new-found knowledge leads to faster economic, social and political changes; in an attempt to understand what is happening, we accelerate the accumulation of knowledge, which leads only to faster and greater upheavals. Consequently, we are less and less able to make sense of the present or forecast the future.”

This acceleration is also eliminating the gap between cause and consequence. We used to have the luxury of time to digest disruption. But now, the gap between the introduction of the technology and the ripples of the ramifications is shrinking.

Think about the ethical dilemmas and social implications introduced by the invention of the printing press. Thanks to this technology, literacy started creeping down through the social classes; it totally disrupted established hierarchies, unleashed ideological revolutions and ushered in tsunamis of social change. But the causes and consequences were separated by decades and even centuries. Should Gutenberg be held responsible for the French Revolution? The idea seems laughable, but only because almost three and a half centuries lie between the two.

As the printing press eventually proved, technology typically dismantles vertical hierarchies. It democratizes capabilities – spreading them down to new users and – in the process – making the previously impossible possible. I have always said that technology is simply a tool, albeit an often disruptive one. It doesn’t change human behaviors. It enables them. But here we have an interesting phenomenon: if technology pushes capabilities down to more people and simultaneously frees those users from the restraint of a verticalized governing structure, you have a highly disruptive sociological experiment happening in real time with a vast sample of subjects.

Most things about human nature are governed by a normal distribution curve – also known as a bell curve. Behaviors expressed through new technologies are no exception. When you rapidly expand access to a capability you are going to have a spectrum of ethical attitudes interacting with it. At one end of the spectrum, you will have bad actors. You will find these actors on both sides of a market expanding at roughly the same rate as our universe. And those actors will do awful things with the technology.

Our innate sense of fairness seeks a simple line between cause and effect. If shootings happen at an AirBnB party house, then AirBnB should be held at least partly responsible. Right?

I’m not so sure. That’s the simple answer, but after giving it much thought, I don’t believe it’s the right one. Like my previous example of the printing press, I think trying to saddle a new technology with the unintentional and unforeseen social disruption unleashed by that technology is overly myopic. It’s an attitude that will halt technological progress in its tracks.

I fervently believe new technologies should be designed with humanitarian principles in mind. They should elevate humans, strive for neutrality, be impartial and foster independence. In the real world, they should do all this in a framework that allows for profitability. It is this, and only this, that is reasonable to ask from any new technology. To try to ask it to foresee every potential negative outcome or to retroactively hold it accountable when those outcomes do eventually occur is both unreasonable and unrealistic.

Disruptive technologies will always find the loopholes in our social fabric. They will make us aware of the vulnerabilities in our legislation and governance. If there is an answer to be found here, it is to be found in ourselves. We need to take accountability for the consequences of the technologies we adopt. We need to vote for governments that are committed to keeping pace with disruption through timely and effective governance.

Like it or not, the technology we have created and adopted has propelled us into a new era of complexity and unpredictability. We are flying into uncharted territory by the seat of our pants here. And before we rush to point fingers we should remember – we’re the ones that asked for it.

The Saddest Part about Sadfishing

There’s a certain kind of post I’ve always felt uncomfortable with when I see it on Facebook. You know the ones I’m talking about — where someone volunteers excruciatingly personal information about their failing relationships, their job dissatisfaction, their struggles with personal demons. These posts make me squirm.

Part of that feeling is that, being of British descent, I deal with emotions the same way the main character’s parents are dealt with in the first 15 minutes of any Disney movie: Dispose of them quickly, so we can get on with the business at hand.

I also suspect this ultra-personal sharing is happening in the wrong forum. So today, I’m trying to put an empirical finger on my gut feelings of unease about this particular topic.

After a little research, I found there’s a name for this kind of sharing: sadfishing. According to Wikipedia, “Sadfishing is the act of making exaggerated claims about one’s emotional problems to generate sympathy. The name is a variation on ‘catfishing.’ Sadfishing is a common reaction for someone going through a hard time, or pretending to be going through a hard time.”

My cynicism towards these posts probably sounds unnecessarily harsh. It goes against our empathetic grain. These are people who are just calling out for help. And one of the biggest issues with mental illness is the social stigma attached to it. Isn’t having the courage to reach out for help through any channel available — even social media — a good thing?

I do believe asking for help is undeniably a good thing. I wish I myself was better able to do that. It’s Facebook I have the problem with. Actually, I have a few problems with it.

It’s Complicated

Problem #1: Even if a post is a genuine request for help, the poster may not get the type of response he or she needs.

Mental illness, personal grief and major bumps on our life’s journey are all complicated problems — and social media is a horrible place to deal with complicated problems. It’s far too shallow to contain the breadth and depth of personal adversity.

Many read a gut-wrenching, soul-scorching post (genuine or not), then leave a heart or a sad face, and move on. Within the paper-thin social protocols of Facebook, this is an acceptable response. And it’s acceptable because we have no skin in the game. That brings us to problem #2.

Empathy is Wired to Work Face-to-Face

Our humanness works best in proximity. It’s the way we’re wired.

Let’s assume someone truly needs help. If you’re physically with them and you care about them, things are going to get real very quickly. It will be a connection that happens at all possible levels and through all senses.

This will require, at a minimum, hand-holding and, more likely, hugs, tears and a staggering personal commitment to help this person. It is not something taken or given lightly. It can be life-changing on both sides.

You can’t do it at arm’s length. And you sure as hell can’t do it through a Facebook reply.

The Post That Cried Wolf

But the biggest issue I have is that social media takes a truly genuine and admirable instinct, the simple act of helping someone, and turns it into just another example of fake news.

Not every plea for help on Facebook is exaggerated just for the sake of gaining attention, but some of them are.

Again, Facebook tends to take the less admirable parts of our character and amplify them throughout our network. So, if you tend to be narcissistic, you’re more apt to sadfish. If you have someone you know who continually reaches out through Facebook with uncomfortably personal posts of their struggles, it may be a sign of a deeper personality disorder, as noted in this post on The Conversation.

This phenomenon can create a kind of social numbness that could mask genuine requests for help. For the one sadfishing, it becomes another game that relies on generating the maximum number of social responses. Those of us on the other side quickly learn how to play the game. We minimize our personal commitment and shield ourselves against false drama.

The really sad thing about all of this is that social media has managed to turn legitimate cries for help into just more noise we have to filter through.

But What If It’s Real?

Sadfishing aside, for some people Facebook might be all they have in the way of a social lifeline. And in this case, we mustn’t throw the baby out with the bathwater. If someone you know and care about has posted what you suspect is a genuine plea for help, respond as humans should: Reach out in the most personal way possible. Elevate the conversation beyond the bounds of social media by picking up the phone or visiting them in person. Create a person-to-person connection and be there for them.

The Fundamentals of an Evil Marketplace

Last week, I talked about the nature of tech companies and why this leads to them being evil. But as I said, there was an elephant in the room I didn’t touch on — and that’s the nature of the market itself. The platform-based market also has inherent characteristics that lead toward being evil.

The problem is that corporate ethics are usually based on the philosophies of Milton Friedman, an economist whose heyday was in the 1970s. Corporations are playing by a rule book that is tragically out of date.

Beware the Invisible Hand

Friedman said, “The great virtue of a free market system is that it does not care what color people are; it does not care what their religion is; it only cares whether they can produce something you want to buy. It is the most effective system we have discovered to enable people who hate one another to deal with one another and help one another.”

This is a porting over of Adam Smith’s “Invisible Hand” theory from economics to ethics: the idea that an open and free marketplace is self-regulating and, in the end, the model that is the most virtuous to the greatest number of people will take hold.

That was a philosophy born in another time, referring to a decidedly different market. Friedman’s “virtue” depends on a few traditional market conditions, idealized in the concept of a perfect market: “a market where the sellers of a product or service are free to compete fairly, and sellers and buyers have complete information.”

Inherent in Friedman’s definition of market ethics is the idea of a deliberate transaction, a value trade driven by rational thought. This is where the concept of “complete information” comes in. This information is what’s required for a rational evaluation of the value trade. When we talk about the erosion of ethics we see in tech, we quickly see that the prerequisite of a deliberate and rational transaction is missing — and with it, the conditions needed for an ethical “invisible hand.”

The other assumption in Friedman’s definition is a marketplace that encourages open and healthy competition. This gives buyers the latitude to make the choice that best aligns with their requirements.

But when we’re talking about markets that tend to trend towards evil behaviors, we have to understand that there’s a slippery slope that ends in a place far different than the one Friedman idealized.

Advertising as a Revenue Model

For developers of user-dependent networks like Google and Facebook, using advertising sales for revenue was the path of least resistance for adoption — and, once adopted by users, to profitability. It was a model co-opted from other forms of media, so everybody was familiar with it. But, in the adoption of that model, the industry took several steps away from the idea of a perfect market.

First of all, you have significantly lowered the bar required for that rational value exchange calculation. For users, there is no apparent monetary cost. Our value judgement mechanisms idle down because it doesn’t appear as if the protection they provide is needed.

In fact, the opposite happens. The reward center of our brain perceives a bargain and starts pumping the accelerator. We rush past the accept buttons to sign up, thrilled at the new capabilities and convenience we receive for free. That’s the first problem.

The second is that the minute you introduce advertising, you lose the transparency that’s part of the perfect market. There is a thick layer of obfuscation that sits between “users” and “producers.” The smoke screen is required because of the simple reality that the best interests of the user are almost never aligned with the best interests of the advertiser.

In this new marketplace, advertising is a zero-sum game. For the advertiser to win, the user has to lose. The developers of platforms hide this simple arithmetic behind a veil of secrecy and baffling language.

Products That are a Little Too Personal

The new marketplace is different in another important way: The products it deals in are unlike any products we’ve ever seen before.

The average person spends about a third of his or her time online, mostly interacting with a small handful of apps and platforms. Facebook alone accounts for almost 20% of all our waking time.

This reliance on these products reinforces our belief that we’re getting the bargain of a lifetime: All the benefits the platform provides are absolutely free to us! Of course, in the time we spend online, we are feeding these tools a constant stream of intimately personal information about ourselves.

What is lurking behind this benign facade is a troubling progression of addictiveness. Because revenue depends on advertising sales, two factors become essential to success: the attention of users, and information about them.

An offer of convenience or usefulness “for free” is the initial hook, but then it becomes essential to entice users to spend more time with the platform and to volunteer more information about themselves. The most effective way to do this is to make them more and more dependent on the platform.

Now, you could build conscious dependency by giving users good, rational reasons to keep coming back. Or, you could build dependence subconsciously, by creating addicts. The first option is good business that follows Friedman’s philosophy. The second option is just evil. Many tech platforms — Facebook included — have chosen to go down both paths.

The New Monopolies

The final piece of Friedman’s idealized marketplace that’s missing is the concept of healthy competition. In a perfect marketplace, the buyer’s cost of switching is minimal. You have a plethora of options to choose from, and you’re free to pursue the one that’s best for you.

This is definitely not the case in the marketplace of online platforms and tools like Google and Facebook. Because they are dependent on advertising revenues, their survival is linked to audience retention. To this end, they have constructed virtual monopolies by ruthlessly eliminating or buying up any potential competitors.

Further, under the guise of convenience, they have imposed significant costs on those who do choose to leave. The net effect of this is that users are faced with a binary decision: Opt into the functionality and convenience offered, or opt out. There are no other choices.

Whom Do You Serve?

Friedman also said in a 1970 paper that the only social responsibility of a business is to increase its profits. But this raises the further question, “What must be done — and for whom — to increase profits?” If it’s creating a better product so users buy more, then there is an ethical trickle-down effect that should benefit all.

But this isn’t the case if profitability is dependent on selling more advertising. Now we have to deal with an inherent ethical conflict. On one side, you have the shareholders and advertisers. On the other, you have users. As I said, for one to win, the other must lose. If we’re looking for the root of all evil, we’ll probably find it here.

A Troubling Prognostication

It’s that time of year again. My inbox is jammed with pitches from PR flacks trying to get some editorial love for their clients. In all my years of writing, I think I have actually taken the bait maybe once or twice. That is an extremely low success rate. So much for targeting.

In early January, many of the pitches offer either reviews of 2019 or predictions for 2020. I was just about to hit the delete button on one such pitch when something jumped out at me: “The number-one marketing trend for 2020 will be CDPs: customer data platforms.”

I wasn’t surprised by that. It makes sense. I know there’s a truckload of personal data being collected from everyone and their dog. Marketers love platforms. Why wouldn’t these two things come together?

But then I thought more about it — and immediately had an anxiety attack. This is not a good thing. In fact, this is a catastrophically terrible thing. It’s right up there with climate change and populist politics as the biggest world threats that keep me up at night.

To close out 2019, fellow Insider Maarten Albarda gave you a great guide on where not to spend your money. In that column, he said this: “Remember when connected TVs, Google Glass and the Amazon Fire Phone were going to provide break-through platforms that would force mass marketing out of the box, and into the promised land of end-to-end, personalized one-on-one marketing?”

Ah, marketing nirvana: the Promised Land! The Holy Grail of personalized marketing. A perfect, friction-free direct connection between the marketer and the consumer.

Maarten went on to say that social media is one of the channels you shouldn’t be throwing money into, saying, “It’s also true that we have yet to see a compelling case where social media played a significant role in the establishment or continued success of a brand or service.”

I’m not sure I agree with this, though I admit I don’t have the empirical data to back up my opinion. But I do have another, darker reason why we should shut off the taps providing the flow of revenue to the usual social suspects. Social media based on an advertising revenue model is a cancerous growth — and we have to shut off its blood flow.

Personalized one-to-one marketing — that Promised Land — cannot exist without a consistent and premeditated attack on our privacy. It comes at a price we should not be prepared to pay.

It depends on us trusting profit-driven corporations that have proven again and again that they shouldn’t be trusted. It is fueled by our darkest and least admirable motives.

The ecosystem that is required to enable one-to-one marketing is a cesspool of abuse and greed. In a pristine world of marketing with players who sport shiny ideals and rock-solid ethics, maybe it would be okay. Maybe. Personally, I wouldn’t take that bet. But in the world we actually live and work in, it’s a sure recipe for disaster.

To see just how subversive data-driven marketing can get, read “Mindf*ck” by Christopher Wylie. If that name sounds vaguely familiar to you, let me jog your memory. Wylie is the whistleblower who first exposed the Cambridge Analytica scandal. An openly gay, liberal, pink-haired Canadian, he seems an unlikely candidate to be the architect of the data-driven “Mindf*ck” machine that drove Trump into office and the Brexit vote over the 50% threshold.

Wylie admits to being blinded by the tantalizing possibilities of what he was working on at Cambridge Analytica: “Every day, I overlooked, ignored, or explained away warning signs. With so much intellectual freedom, and with scholars from the world’s leading universities telling me we were on the cusp of ‘revolutionizing’ social science, I had gotten greedy, ignoring the dark side of what we were doing.”

But Wylie is more than a whistleblower. He’s a surprisingly adept writer who has a firm grasp on not just the technical aspects, but also the psychology behind the weaponization of data. If venture capitalist Roger McNamee’s tell-all expose of Facebook, “Zucked,”  kept you up at night, “Mindf*ck” will give you screaming night terrors.

I usually hold off jumping on the year-end prognostication bandwagon, because I’ve always felt it’s a mug’s game. I would like to think that 2020 will be the year when the world becomes “woke” to the threat of profit-driven data abuse — but based on our collective track record of ignoring inconvenient truths, I’m not holding my breath.

Just in Time for Christmas: More Search Eye-Tracking

The good folks over at the Nielsen Norman Group have released a new search eye-tracking report. The findings are quite similar to one my former company — Mediative — did a number of years ago (this link goes to a write-up about the study. Unfortunately, the link to the original study is broken. *Insert head smack here).

In the Nielsen Norman study, the two authors — Kate Moran and Cami Goray — looked at how a more visually rich and complex search results page would impact user interaction with the page. The authors of the report called the sum of participant interactions a “Pinball Pattern”: “Today, we find that people’s attention is distributed on the page and that they process results more nonlinearly than before. We observed so much bouncing between various elements across the page that we can safely define a new SERP-processing gaze pattern — the pinball pattern.”

While I covered this at some length when the original Mediative report came out in 2014 (in three separate columns: 1, 2 & 3), there are some themes that bear repeating. Unfortunately, I found that the study’s authors missed what I think are some of the more interesting implications.

In the days of the “10 Blue Links” search results page, we used the same scanning strategy no matter what our intent was. In an environment where the format never changes, you can afford to rely on a stable and consistent strategy. 

In our first eye tracking study, published in 2004, this consistent strategy led to something we called the Golden Triangle. But those days are over.

Today, when every search result can look a little bit different, it comes as no surprise that every search “gaze plot” (the path the eyes take through the results page) will also be different. Let’s take a closer look at the reasons for this. 

SERP Eye Candy

In the Nielsen Norman study, the authors felt “visual weighting” was the main factor in creating the “Pinball Pattern”: “The visual weight of elements on the page drives people’s scanning patterns. Because these elements are distributed all over the page and because some SERPs have more such elements than others, people’s gaze patterns are not linear. The presence and position of visually compelling elements often affect the visibility of the organic results near them.”

While the visual impact of the page elements is certainly a factor, I think it’s only part of the answer. I believe a bigger, and more interesting, factor is how the searcher’s brain and its searching strategies have evolved in lockstep with a more visually complex results page. 

The Importance of Understanding Intent

The reason why we see so much variation in scan patterns is that there is also extensive variation in searchers’ intent. The exact same search query could be used by someone intent on finding an online or physical place to purchase a product, comparing prices on that product, looking to learn more about the technical specs of that product, looking for how-to videos on the use of the product, or looking for consumer reviews on that product.

It’s the same search, but with many different intents. And each of those intents will result in a different scanning pattern. 

Predetermined Page Visualizations

I really don’t believe we start each search page interaction with a blank slate, passively letting our eyes be dragged to the brightest, shiniest object on the page. I think that when we launch the search, our intent has already created an imagined template for the page we expect to see. 

We have all used search enough to be fairly accurate at predicting what the page elements might be: thumbnails of videos or images, a map showing relevant local results, perhaps a Knowledge Graph result in the right-hand column.

Yes, the visual weighting of elements acts as an anchor to draw the eye, but I believe the eye is using this anticipated template to efficiently parse the results page.

I have previously referred to this behavior as a “chunking” of the results page. And we already have an idea of what the most promising chunks will be when we launch the search. 

It’s this chunking strategy that’s driving the “pinball” behavior in the Nielsen Norman study. In the Mediative study, it was somewhat surprising to see that users were clicking on a result in about half the time it took in our original 2005 study. We cover more search territory, but thanks to chunking, we do it much more efficiently.

One Last Time: Learn Information Scent

Finally, let me drag out a soapbox I haven’t used for a while. If you really want to understand search interactions, take the time to learn about Information Scent and how our brains follow it (Information Foraging Theory — Pirolli and Card, 1999 — the link to the original study is also broken. *Insert second head smack, this one harder.). 

This is one area where the Nielsen Norman Group and I are totally aligned. In 2003, Jakob Nielsen — the first N in NNG — called the theory “the most important concept to emerge from human-computer interaction research since 1993.”

On that we can agree.

Why Quitting Facebook is Easier Said than Done

Not too long ago, I was listening to an interview with a privacy expert about… you guessed it, Facebook. The gist of the interview was that Facebook can’t be trusted with our personal data, as it has proven time and again.

But when asked if she would quit Facebook completely because of this — as tech columnist Walt Mossberg did — the expert said something interesting: “I can’t really afford to give up Facebook completely. For me, being able to quit Facebook is a position of privilege.”

Wow! There is a lot living in that statement. It means Facebook is fundamental to most of our lives — it’s an essential service. But it also means that we don’t trust it — at all. Which puts Facebook in the same category as banks, cable companies and every level of government.

Facebook — in many minds, anyway — became an essential service because of Metcalfe’s Law, which states that the effect of a network is proportional to the square of the number of connected users of the system. More users = disproportionately more value. Facebook has Metcalfe’s Law nailed. It has almost two and a half billion users.

But it’s more than just sheer numbers. It’s the nature of engagement. Thanks to a premeditated addictiveness in Facebook’s design, its users are regular users. Of those 2.5 billion users, 1.6 billion log in daily — 1.1 billion of them from a mobile device. That means that roughly 15% of all the people in the world are constantly — addictively — connected to Facebook.
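Metcalfe’s math — and that 15% figure — are easy to sanity-check. Here’s a quick sketch in Python; the ~7.5 billion world-population figure is my own assumption, while the user counts are the ones cited above:

```python
# Metcalfe's Law: a network's value grows with the square of its user count,
# since the number of possible pairwise connections is n * (n - 1) / 2, ~ n^2.
def metcalfe_value(n):
    return n * n

# Doubling the user base quadruples the theoretical value:
assert metcalfe_value(2_000_000) / metcalfe_value(1_000_000) == 4

# The column's usage figures, as a share of world population:
world_pop = 7.5e9     # assumed world population
mobile_daily = 1.1e9  # daily mobile logins, per the text
share = 100 * mobile_daily / world_pop
print(f"{share:.0f}% of the world logs in to Facebook daily on mobile")
```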

And that’s why Facebook appears to be essential. If we need to connect to people, Facebook is the most obvious way to do it. If we have a business, we need Facebook to let our potential customers know what we’re doing. If we belong to a group or organization, we need Facebook to stay in touch with other members. If we are social beasts at all, we need Facebook to keep our social network from fraying away.

We don’t trust Facebook — but we do need it.

Or do we? After all, we Homo sapiens have managed to survive for 99.9925% of our collective existence without Facebook. And there is mounting research indicating that going cold turkey on Facebook is great for your own mental health. But like all things that are good for you, quitting Facebook can be a real pain in the ass.
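That oddly precise 99.9925% checks out under one plausible set of assumptions — roughly 200,000 years of Homo sapiens and Facebook’s 2004 launch, 15 years before this was written:

```python
species_years = 200_000        # assumed age of Homo sapiens as a species
facebook_years = 2019 - 2004   # Facebook launched in February 2004
pct_without_facebook = 100 * (1 - facebook_years / species_years)
print(f"{pct_without_facebook:.4f}%")  # 99.9925%
```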

Last year, New York Times tech writer Brian Chen decided to ditch Facebook. This is a guy who is fully conversant in tech — and even he found making the break is much easier said than done. Facebook, in its malevolent brilliance, has erected some significant barriers to exit for its users if they do try to make a break for it.

This is especially true if you have fallen into the convenient trap of using Facebook’s social sign-in on sites rather than juggling multiple passwords and user IDs. If you’re up for the challenge, Chen has put together a 6-step guide to making a clean break of it.

But what if you happen to use Facebook for advertising? You’ve essentially sold your soul to Zuckerberg. Reading through Chen’s guide, I’ve decided that it’s just easier to go into the Witness Protection Program. Even there, Facebook will still be tracking me.

By the way, after six months without Facebook, Chen did a follow-up on how his life had changed. The short answer is: not much, but what did change was for the better. His family didn’t collapse. His friends didn’t desert him. He still managed to have a social life. He spent a lot less on spontaneous online purchases. And he read more books.

The biggest outcome was that advertisers “gave up on stalking” him. Without a steady stream of personal data from Facebook, Instagram thought he was a woman.

Whether you’re able to swear off Facebook completely or not, I wonder what the continuing meltdown of trust in Facebook will do for its usage patterns. As in most things digital, young people seem to have intuitively stumbled on the best way to use Facebook. Use it if you must to connect to people when you need to (in their case, grandmothers and great-aunts) — but for heaven’s sake, don’t post anything even faintly personal. Never afford Facebook’s AI the briefest glimpse into your soul. No personal affirmations, no confessionals, no motivational posts and — for the love of all that is democratic — nothing political.

Oh, one more thing. Keep your damned finger off of the like button, unless it’s for your cousin Shermy’s 55th birthday celebration in Zihuatanejo.

Even then, maybe it’s time to pick up the phone and call the ol’ Shermeister. It’s been too long.

The Hidden Agenda Behind Zuckerberg’s “Meaningful Interactions”

It probably started with a good intention. Facebook — aka Mark Zuckerberg — wanted to encourage more “Meaningful Interactions.” And so, early last year, Facebook engineers started making some significant changes to the algorithm that determined what you saw in your News Feed. Here are some excerpts from Zuck’s post to that effect:

“The research shows that when we use social media to connect with people we care about, it can be good for our well-being. We can feel more connected and less lonely, and that correlates with long term measures of happiness and health. On the other hand, passively reading articles or watching videos — even if they’re entertaining or informative — may not be as good.”

That makes sense, right? It sounds logical. Zuckerberg went on to say how they were changing Facebook’s algorithm to encourage more “Meaningful Interactions.”

“The first changes you’ll see will be in News Feed, where you can expect to see more from your friends, family and groups.

As we roll this out, you’ll see less public content like posts from businesses, brands, and media. And the public content you see more will be held to the same standard — it should encourage meaningful interactions between people.”


Let’s fast-forward almost two years, and we now see the outcome of that good intention: an ideological landscape with a huge chasm where the middle ground used to be.

The problem is that Facebook’s algorithm naturally favors content from like-minded people. And surprisingly, it doesn’t take a very high degree of ideological homogeneity to create a highly polarized landscape. This shouldn’t have come as a surprise. American economist Thomas Schelling showed us how easily segregation happens almost 50 years ago.

The Schelling Model of Segregation was created to demonstrate why racial segregation was such a chronic problem in the U.S., even given repeated efforts to desegregate. The model showed that even when we’re pretty open minded about who our neighbors are, we will still tend to self-segregate over time.

The model works like this: A grid represents a population with two different types of agents, X and O. The square an agent occupies represents where it lives. If the agent is satisfied, it stays put. If it isn’t satisfied, it moves to a new location. The variable here is the satisfaction threshold — the percentage of an agent’s immediate neighbors that must be the same type of agent as it is. For example, the threshold might be set at 50%, meaning an X agent needs at least 50% of its neighbors to also be of type X. (If you want to try the model firsthand, Frank McCown, a computer science professor at Harding University, created an online version.)

The most surprising thing that comes out of the model is that this satisfaction threshold doesn’t have to be set very high at all for extensive segregation to happen over time. You start to see significant “clumping” of agent types at percentages as low as 25%. At 40% and higher, you see sharp divides between the X and O communities. Remember, even at 40%, Agent X only wants 40% of its neighbors to also be of the X persuasion. It’s okay being surrounded by up to 60% Os. That is much more open-minded than most human agents I know.
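The model itself is simple enough to sketch in a few dozen lines of Python. This is a minimal, illustrative version — the grid size, vacancy rate and step count are arbitrary choices of mine, and McCown’s online version differs in its details — that moves each unsatisfied agent to a random empty square and reports how segregated the grid ends up:

```python
import random

def schelling(size=20, empty_frac=0.1, threshold=0.4, steps=50, seed=42):
    """Minimal Schelling segregation model: X/O agents on a wrapping grid
    move to a random empty cell whenever fewer than `threshold` of their
    occupied neighbors share their type. Returns the average fraction of
    same-type neighbors -- higher means more segregated."""
    rng = random.Random(seed)
    cells = ["X", "O"] * int(size * size * (1 - empty_frac) / 2)
    cells += [None] * (size * size - len(cells))  # None = empty square
    rng.shuffle(cells)
    grid = {(r, c): cells[r * size + c] for r in range(size) for c in range(size)}

    def neighbors(r, c):
        return [grid[(r + dr) % size, (c + dc) % size]
                for dr in (-1, 0, 1) for dc in (-1, 0, 1) if (dr, dc) != (0, 0)]

    def satisfied(r, c):
        agent = grid[r, c]
        occupied = [n for n in neighbors(r, c) if n is not None]
        same = [n for n in occupied if n == agent]
        return not occupied or len(same) / len(occupied) >= threshold

    for _ in range(steps):
        unhappy = [pos for pos, a in grid.items() if a and not satisfied(*pos)]
        empties = [pos for pos, a in grid.items() if a is None]
        rng.shuffle(unhappy)
        for pos in unhappy:
            if not empties:
                break
            new = empties.pop(rng.randrange(len(empties)))
            grid[new], grid[pos] = grid[pos], None  # move agent, vacate old cell
            empties.append(pos)

    fracs = []
    for (r, c), a in grid.items():
        if a:
            occupied = [n for n in neighbors(r, c) if n is not None]
            if occupied:
                fracs.append(sum(n == a for n in occupied) / len(occupied))
    return sum(fracs) / len(fracs)
```

Run it at a few thresholds and you can watch the effect Schelling described: even a modest threshold like 0.3 or 0.4 typically pushes the average same-type neighbor fraction well above the threshold itself.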

Now, let’s move the Schelling Model to Facebook. We know from the model that even pretty open-minded people will physically segregate themselves over time. The difference is that on Facebook, they don’t move to a new part of the grid, they just hit the “unfollow” button. And the segregation isn’t physical – it’s ideological.

This natural behavior is then accelerated by Facebook’s “Meaningful Interactions” algorithm, which filters on the basis of people you have connected with, setting in motion an ever-tightening spiral that eventually restricts your feed to a very narrow ideological horizon. The resulting cluster then becomes a segment used for ad targeting. We can quickly see how Facebook first built these very homogeneous clusters by changing its algorithm, and then profits from them by providing advertisers the tools to micro-target them.

Finally, after doing all this, Facebook absolves itself of any responsibility to ensure subversive and blatantly false messaging isn’t delivered to these ideologically vulnerable clusters. It’s no wonder comedian Sacha Baron Cohen just took Zuck to task, saying “if Facebook were around in the 1930s, it would have allowed Hitler to post 30-second ads on his ‘solution’ to the ‘Jewish problem’”.

In rereading Mark Zuckerberg’s post from two years ago, you can’t help but start reading between the lines. First of all, there is mounting evidence that disproves his contention that meaningful social media encounters help your well-being. It appears that quitting Facebook entirely is much better for you.

And secondly, I suspect that — just like his defence of running false and malicious advertising by citing free speech — Zuck has a not-so-hidden agenda here. I’m sure Zuckerberg and his Facebook engineers weren’t oblivious to the fact that their changes to the algorithm would result in nicely segmented psychographic clusters that would be like catnip to advertisers — especially political advertisers. They were consolidating exactly the same vulnerabilities that were exploited by Cambridge Analytica.

They were building a platform that was perfectly suited to subvert democracy.