A Lesson Learned from the Lost Generation

“I wasn’t around for Y2K. Was it like this?”

The question was posed to me by a young man named Jeremy – about 18 or 19 – who brought my online grocery order to my car. He had just been telling me how store employees were scrambling to stay ahead of the items people had started hoarding, posting purchase limits to keep the shelves from being stripped bare. On this day, it was bread. He shook his head, unable to wrap it around people’s panic. He was trying to relate it to something that could serve as a baseline.

My initial reaction to his question was to laugh. Y2K was a nothingburger. We panicked, we nervously rang in the New Year of 2000, and then we laughed sheepishly and went on with life.

This is different. On so many levels. I told that to Jeremy. “I have been around for almost 60 years. I have never experienced anything like this before.”

I’ve been thinking about that conversation a lot since. In Jeremy’s short time here on the planet, he probably has never experienced true hardship. But then, neither have I. Not really. Not like what we’re about to experience.

If you were to plot a trendline of my life over the last 6 decades, it would overwhelmingly be up and to the right. Sure, there were blips. But it’s been a pretty damned good 60 years. For me, hardship has been defined by putting off a trip because I couldn’t afford it. Or buying a used car when I wanted a new one. Poor me.

In the writing of this, I tried to find some formula to put a magnitude of significance on events like this. I couldn’t find one, so I made my own:

Personal Impact × Number of People Impacted × Duration of Impact
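
To make the multiplication concrete, here’s a trivial sketch in Python with made-up 1-to-10 scores for two hypothetical events. The scores are mine and purely illustrative; the only point is how the product behaves.

# Magnitude of significance = personal impact x people impacted x duration.
# The 1-to-10 scores below are invented, purely to show how the product behaves.
def magnitude(personal_impact, people_impacted, duration):
    return personal_impact * people_impacted * duration

print(magnitude(personal_impact=2, people_impacted=6, duration=3))   # a contained event: 36
print(magnitude(personal_impact=8, people_impacted=10, duration=7))  # a global, long-running one: 560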

In 2016, the Pew Research Center asked Americans to rank the most significant events of their lifetimes. If we just look at the events that were negative, they were 9/11 (by a significant margin), the JFK assassination and the Vietnam War.

But now let’s attempt to quantify the magnitude of significance. When I say personal impact, I’m not talking about emotional impact. I’m talking about material impacts on my life that are directly attributable to the event.

My heart broke on 9/11, just like all of yours. That day would change my perspective on many things. But in real terms, it didn’t shift my life in any significant ways. There was tightened security when I travelled, but that was about it. This in no way minimizes the tragedy of the event. I know it was excruciatingly real for some of you reading this. I’m just putting it in perspective for myself.

This will be different. There is a shit-ton of uncertainty about what lies ahead, but I’m pretty sure all our lives are going to change significantly for the next 18 months to 2 years at least.  And it will impact everyone in the world. The vast majority of us have never been through anything like this before.  But others have. In fact, a whole generation has. Unfortunately, none of them are around to talk to. They were called the Lost Generation.

My grandfather was part of this generation. Officially, those belonging to the Lost Generation were born between 1883 and 1900. Charles Edward Hotchkiss was born in Herefordshire, England in 1888. He died in Ontario, Canada in 1955, at the age of 67. He was just 8 years older than I am today. Given what we’re going through currently, I stopped to think about what “Charlie” experienced in his lifetime.

In 1910, he boarded the SS Lake Champlain in Liverpool and came to Canada. He was 21. Four years later, he volunteered for service in World War I. In the next 4 years, 9 million soldiers would die, 21 million would be wounded (my grandfather was one of them), 7 million would be left permanently disabled. 10 million civilians also died.

Those numbers are staggering, but an even deadlier and more significant event was just getting started in 1918. Today, we remember it as the Spanish Flu, but that is a misnomer. It was called that because early reporting of the severity of the influenza pandemic was censored in wartime Europe for fear that troops would panic and desert. The only country where reporting was somewhat accurate was neutral Spain, which led to the mistaken notion that the impact was worse there than anywhere else. By the time the pandemic subsided in 1920, somewhere between 50 and 100 million people would have died. It had infected 500 million people, a quarter of the world’s population.

This was the reality newly discharged Charlie came back to when he stepped off the boat in Halifax on September 14, 1919. He was 31.

Charlie married my grandmother, Rose, in 1926. Three years later, the world slipped into the Great Depression. Half of all banks in the US failed. Unemployment spiked to 25%. International trade collapsed by 65%. Millions became homeless migrants. And it would continue like this for the next 10 years. In the middle of all this – in 1935 – Charlie and Rose had a baby. It was my father, William Francis Hotchkiss.

When my dad was just 4, World War II started. My grandfather, who was 50, was too old to actively serve, but the impact of the war on Rose, William and Charlie was still immense. Over the next 6 years, more than 100 million people from over 30 countries would be directly involved in the war. It is estimated 20 million military personnel and 40 million civilians would die in those 6 years.

These events, any one of which would be staggering to us, were packed into just 3 decades. I tried to imagine myself going through that from 1990 to today. I couldn’t.

Sometimes, when you can’t see forward, it’s helpful to look back. When I did that, I realized we’re a pretty resilient species. The Lost Generation laid the foundation for the world we live in today. They weathered storm after storm after storm. They made it through. They raised families, started businesses and survived.

It will get hard for us. Really hard. It’s a definition of hardship many of us will be dealing with for the first time in our lives. But we go into this with technological and societal advantages the Lost Generation never had or could even dream of. We should be able to do this without falling apart.

We come from good stock.

Whipped Into a Frenzy

Once again, we’re in unprecedented territory. According to the CDC, COVID-19 is the first global pandemic since the 2009 H1N1 outbreak. While Facebook was around in 2009, it certainly wasn’t as pervasive or impactful as it is today. Neither – for that matter – was H1N1 when compared to COVID-19. That would make COVID-19 the first true pandemic in the age of social media.

While we’re tallying the rapidly mounting human and economic costs of the pandemic on a day-by-day basis, there is a third type of damage to consider. There will be a cognitive cost to this as well.

So let’s begin by unpacking the psychology of a pandemic. Then we’ll add the social media lens to that.

Emotional Contagion aka “The Toilet Paper Syndrome”

Do you have toilet paper at your local store? Me neither. Why?

The short answer is that there is no rational answer. There is no disruption in the supply chain of toilet paper. If you were inclined to stock up on something to battle COVID-19, hand sanitizer would be a much better choice. Search as you might, there is no logical reason why people should be pulling toilet paper out of their local Costco by the pallet-load.

There is really only one explanation: panic is contagious. It’s called emotional contagion. And there is an evolutionary explanation for it. We evolved as herd animals, and when our threats came from the environment around us, it made sense to panic when you saw your neighbor panicking. Those that were on the flanks of the herd acted as an early warning system for the rest. When you saw panic close to you, the odds were very good that you were about to be eaten, trampled or buried under a rockslide. We’re hardwired to live by the principle of “Monkey see, monkey do.”

Here’s the other thing about emotional contagion. It doesn’t work very well if you have to take time to think about it. Panicked responses to threats from your environment will only save your life if they happen instantly. Natural selection has ensured they bypass the slower and more rational processing loops of our brain.

But now let’s apply the social media lens to this. Before modern communication tools were invented, emotional contagion was limited by the constraints of physical proximity. It was the original application of social distancing. Emotions could spread through a social node linked by physical proximity, but they would seldom jump across ties to another node separated by distance.

Then came Facebook, a platform perfectly suited to emotional contagion. Through it, emotionally charged messages can spread like wildfire regardless of where the recipients might be – creating cascades of panic across all nodes in a social network.

Now we have cascades of panic causing – by definition – irrational responses. And that’s dangerous. As Wharton management professor Sigal Barsade said in a recent podcast, “I would argue that emotional contagion, unless we get a hold on it, is going to greatly amplify the damage caused by COVID-19.”

Why We Need to Keep Calm and Carry On

Keep Calm and Carry On – the famous slogan from World War II Britain – is more than just a platitude that looks good on a t-shirt. It’s a sound psychological strategy for survival, especially when faced with threats in a complex environment. We need to think with our whole brain and we can only do that when we’re not panicking.

Again, Dr. Barsade cautions us: “One of the things we also know from the research literature is that negative emotions, particularly fear and anxiety, cause us to become very rigid in our decision-making. We’re not creative. We’re not as analytical, so we actually make worse decisions.”

Let’s again consider the Facebook Factor (in this case, Facebook being my proxy for all social media). Negative emotional messages driven by fear get clicked and shared a lot on social media. Unfortunately, much of that messaging is – at best – factually incomplete or – at worst – a complete fabrication. A 2018 study from MIT showed that false news spreads six times faster on social media than factual information.

It gets worse. According to Pew Research, one in five Americans said that social media is their preferred source for news, surpassing newspapers. Among those 18 to 29, it was the number one source. When you consider the inherent flaws in the methodology of a voluntary questionnaire, you can bet the actual number is a lot higher.

Who Can You Trust?

Let’s assume we can stay calm. Let’s further assume we can remain rational. In order to make rational decisions, you need factual information.

Before 2016, you could generally rely on government sources to provide trustworthy information. But that was then. Now, we live in the reality distortion field that daily spews forth fabricated fiction from the Twitter account of Donald J. Trump, aka the President of the United States.

The intentional manipulation of the truth by those we should trust has a crippling effect on our ability to respond as a cohesive and committed community. As recently as just a week and a half ago, a poll found that Democrats were twice as likely as Republicans to say that COVID-19 posed an imminent threat to the U.S. By logical extension, that means that Republicans were half as likely to do something to stop the spread of the disease.

My Plan for the Pandemic

Obviously, we live in a world of social media. COVID-19 or not, there is no going back. And while I have no idea what will happen regarding the pandemic, I do have a pretty good guess how this will play out on social media. Our behaviours will be amplified through social media and there will be a bell curve of those behaviours stretching from assholes to angels. We will see the best of ourselves – and the worst – magnified through the social media lens.

Given that, here’s what I’m planning to do. One I already mentioned. I’m going to keep calm. I’m going to do my damnedest to make calm, rational decisions based on trusted information (i.e. not from social media or the President of the United States) to protect myself, my loved ones and anyone else I can.

The other plan? I’m going to reread everything from Nassim Nicholas Taleb. This is a good time for all of us to brush up on our understanding of robustness and antifragility.

What is the Moral Responsibility of a Platform?

The owners of the AirBnB home in Orinda, California, suspected something was up. The woman who wanted to rent the house for Halloween night swore it wasn’t for a party. She said it was for a family reunion that had to relocate at the last minute because of the wildfire smoke coming from the Kincade fire, 85 miles north of Orinda. The owners reluctantly agreed to rent the home for one night.

Shortly after 9 pm, the neighbors called the owners, complaining of a party raging next door. The owners verified this through their doorbell camera. The police were dispatched. More than 100 people who had responded to a post on social media were packed into the million-dollar home. At 10:45 pm, with no warning, things turned deadly. Gunshots were fired. Four men in their twenties were killed immediately. A 19-year-old woman died the next day. Several others were injured.

Here is my question. Is AirBnB partly to blame for this?

This is a prickly question. And it’s one that extends to any of the highly disruptive platforms. Technical disruption is a race against our need for order and predictability. When the status quo is upended, there is a progression towards a new civility that takes time, but technology is outstripping it. Platforms create new opportunities – for the best of us and the worst.

The simple fact is that technology always unleashes ethical ramifications – the more disruptive the technology, the more serious the ethical considerations. The other tricky bit is that some ethical considerations can be foreseen… but others cannot.

I have often said that our world is becoming a more complex place. Technology is multiplying this complexity at an ever-increasing pace. And the more complex things are, the more difficult they are to predict.

As Homo Deus author Yuval Noah Harari explains, the pace of technology means this complexity keeps compounding, making the future ever harder to predict:

“Today our knowledge is increasing at breakneck speed, and theoretically we should understand the world better and better. But the very opposite is happening. Our new-found knowledge leads to faster economic, social and political changes; in an attempt to understand what is happening, we accelerate the accumulation of knowledge, which leads only to faster and greater upheavals. Consequently, we are less and less able to make sense of the present or forecast the future.”

This acceleration is also eliminating the gap between cause and consequence. We used to have the luxury of time to digest disruption. But now, the gap between the introduction of the technology and the ripples of the ramifications is shrinking.

Think about the ethical dilemmas and social implications introduced by the invention of the printing press. Thanks to the introduction of this technology, literacy started creeping down through social classes and it totally disrupted entire established hierarchies, unleashed ideological revolutions and ushered in tsunamis of social change. But the cause and consequences were separated by decades and even centuries. Should Gutenberg be held responsible for the French Revolution? This seems laughable, but only because almost three and a half centuries lie between the two.

Like the printing press eventually proved, technology typically dismantles vertical hierarchies. It democratizes capabilities – spreading them down to new users and – in the process – making the previously impossible possible. I have always said that technology is simply a tool, albeit an often disruptive one. It doesn’t change human behaviors. It enables them. But here we have an interesting phenomenon. If technology pushes capabilities down to more people and simultaneously frees those users from the restraint of a verticalized governing structure, you have a highly disruptive sociological experiment happening in real time with a vast sample of subjects.

Most things about human nature are governed by a normal distribution curve – also known as a bell curve. Behaviors expressed through new technologies are no exception. When you rapidly expand access to a capability you are going to have a spectrum of ethical attitudes interacting with it. At one end of the spectrum, you will have bad actors. You will find these actors on both sides of a market expanding at roughly the same rate as our universe. And those actors will do awful things with the technology.

Our innate sense of fairness seeks a simple line between cause and effect. If shootings happen at an AirBnB party house, then AirBnB should be held at least partly responsible. Right?

I’m not so sure. That’s the simple answer, but after giving it much thought, I don’t believe it’s the right one. Like my previous example of the printing press, I think trying to saddle a new technology with the unintentional and unforeseen social disruption unleashed by that technology is overly myopic. It’s an attitude that will halt technological progress in its tracks.

I fervently believe new technologies should be designed with humanitarian principles in mind. They should elevate humans, strive for neutrality, be impartial and foster independence. In the real world, they should do all this in a framework that allows for profitability. It is this, and only this, that is reasonable to ask from any new technology. To try to ask it to foresee every potential negative outcome or to retroactively hold it accountable when those outcomes do eventually occur is both unreasonable and unrealistic.

Disruptive technologies will always find the loopholes in our social fabric. They will make us aware of the vulnerabilities in our legislation and governance. If there is an answer to be found here, it is to be found in ourselves. We need to take accountability for the consequences of the technologies we adopt. We need to vote for governments that are committed to keeping pace with disruption through timely and effective governance.

Like it or not, the technology we have created and adopted has propelled us into a new era of complexity and unpredictability. We are flying into uncharted territory by the seat of our pants here. And before we rush to point fingers we should remember – we’re the ones that asked for it.

The Saddest Part about Sadfishing

There’s a certain kind of post I’ve always felt uncomfortable with when I see it on Facebook. You know the ones I’m talking about — where someone volunteers excruciatingly personal information about their failing relationships, their job dissatisfaction, their struggles with personal demons. These posts make me squirm.

Part of that feeling is that, being of British descent, I deal with emotions the same way the main character’s parents are dealt with in the first 15 minutes of any Disney movie: Dispose of them quickly, so we can get on with the business at hand.

I also suspect this ultra-personal sharing  is happening in the wrong forum. So today, I’m trying to put an empirical finger on my gut feelings of unease about this particular topic.

After a little research, I found there’s a name for this kind of sharing: sadfishing. According to Wikipedia, “Sadfishing is the act of making exaggerated claims about one’s emotional problems to generate sympathy. The name is a variation on ‘catfishing.’ Sadfishing is a common reaction for someone going through a hard time, or pretending to be going through a hard time.”

My cynicism towards these posts probably sounds unnecessarily harsh. It goes against our empathetic grain. These are people who are just calling out for help. And one of the biggest issues with mental illness is the social stigma attached to it. Isn’t having the courage to reach out for help through any channel available — even social media — a good thing?

I do believe asking for help is undeniably a good thing. I wish I myself was better able to do that. It’s Facebook I have the problem with. Actually, I have a few problems with it.

It’s Complicated

Problem #1: Even if a post is a genuine request for help, the poster may not get the type of response he or she needs.

Mental illness, personal grief and major bumps on our life’s journey are all complicated problems — and social media is a horrible place to deal with complicated problems. It’s far too shallow to contain the breadth and depth of personal adversity.

Many read a gut-wrenching, soul-scorching post (genuine or not), then leave a heart or a sad face, and move on. Within the paper-thin social protocols of Facebook, this is an acceptable response. And it’s acceptable because we have no skin in the game. That brings us to problem #2.

Empathy is Wired to Work Face-to-Face

Our humanness works best in proximity. It’s the way we’re wired.

Let’s assume someone truly needs help. If you’re physically with them and you care about them, things are going to get real very quickly. It will be a connection that happens at all possible levels and through all senses.

This will require, at a minimum, hand-holding and, more likely, hugs, tears and a staggering personal commitment  to help this person. It is not something taken or given lightly. It can be life-changing on both sides.

You can’t do it at arm’s length. And you sure as hell can’t do it through a Facebook reply.

The Post That Cried Wolf

But the biggest issue I have is that social media takes a truly genuine and admirable instinct, the simple act of helping someone, and turns it into just another example of fake news.

Not every plea for help on Facebook is exaggerated just for the sake of gaining attention, but some of them are.

Again, Facebook tends to take the less admirable parts of our character and amplify them throughout our network. So, if you tend to be narcissistic, you’re more apt to sadfish. If you have someone you know who continually reaches out through Facebook with uncomfortably personal posts of their struggles, it may be a sign of a deeper personality disorder, as noted in this post on The Conversation.

This phenomenon can create a kind of social numbness that could mask genuine requests for help. For the one sadfishing, it becomes another game that relies on generating the maximum number of social responses. Those of us on the other side quickly learn how to play the game. We minimize our personal commitment and shield ourselves against false drama.

The really sad thing about all of this is that social media has managed to turn legitimate cries for help into just more noise we have to filter through.

But What If It’s Real?

Sadfishing aside, for some people Facebook might be all they have in the way of a social lifeline. And in this case, we mustn’t throw the baby out with the bathwater. If someone you know and care about has posted what you suspect is a genuine plea for help, respond as humans should: Reach out in the most personal way possible. Elevate the conversation beyond the bounds of social media by picking up the phone or visiting them in person. Create a person-to-person connection and be there for them.

The Ruts of Our Brain

We are not – by nature – open-minded. In fact, as we learn something, the learning creates neural pathways in our brain that we tend to stick to. In other words, the more we learn, the bigger the ruts get.

Our brains are this way by design. At its core, the brain is an energy-saving device. If there are two options open to it, one requiring more cognitive processing and one requiring less, the brain will default to the less resource-intensive option.

This puts expertise into an interesting new perspective. In a recent study, researchers from Cold Spring Harbor Laboratory, Columbia University, University College London and the Flatiron Institute found that when mice learn a new task, the neurons in their brains actually change as they move from being a novice to an expert. At the beginning, as they’re learning the task, the required neurons don’t “fire” until the brain makes a decision. But, as expertise is gained, those same neurons start responding before they’re even needed. It’s essentially Hebbian theory (named after psychologist Donald Hebb) in action: the neurons that fire together eventually wire together.
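
The “wire together” part can be written as a one-line update rule. Here’s a toy sketch in Python – entirely my own illustration with arbitrary numbers, not the researchers’ model – showing how repeated co-activation strengthens a connection until the cue alone is enough to pre-light the path:

# Toy illustration of Hebb's rule: "neurons that fire together wire together."
# Numbers are arbitrary; the point is that repeated co-activation strengthens a
# connection until the downstream neuron fires from the cue alone (the pre-lit rut).
eta = 0.1        # learning rate
threshold = 1.0  # firing threshold of the downstream neuron
w = 0.2          # initial connection strength

for trial in range(1, 11):
    pre, post = 1.0, 1.0       # during learning, both neurons are active together
    w += eta * pre * post      # Hebbian update: delta_w = eta * pre * post
    fires_on_cue_alone = pre * w >= threshold
    print(f"trial {trial:2d}: w = {w:.2f}, fires on cue alone: {fires_on_cue_alone}")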

We tend to think of experts as bringing a well-honed subset of intellectual knowledge to a question. And that is true, as long as the question is well within their area of expertise. But the minute an expert ventures outside of their “rut,” they begin to flounder. In fact, even when they are in their area of expertise but are asked to predict where that path may lead in the future – beyond their current rut – their expertise doesn’t help them. In 2005, psychologist Philip Tetlock published “Expert Political Judgment” – a book showing the results of a 20-year-long study on the prediction track record of experts. It wasn’t good. According to a New Yorker review of the book, “Human beings who spend their lives studying the state of the world…are poorer forecasters than dart-throwing monkeys.”

Why? Well, just like those mice in the above-mentioned study, once we have a rut, our brains like to stick to the rut. It’s just easier for us. And experts have very deep ruts. The deeper the rut, the more effort it takes to peer above it. As Tetlock found, when it comes to predicting what might happen in some area in the future, even if you happen to be an expert in that area, you’d probably be better off flipping a coin than relying on your brain.

By the way, for most of human history, this has been a feature, not a bug. Saving cognitive energy is a wonderful evolutionary advantage. If you keep doing the same thing over and over, eventually the brain pre-lights the neuronal path required, saving itself time and energy. The brain is directing anticipated traffic at faster than the speed of thought. And it’s doing it so well, it would take a significant amount of cognitive horsepower to derail this action.

Like I said, in a fairly predictable world of cause and effect, this system works. But in an uncertain world full of wild card complexity, it can be crippling.

Complex worlds require Foxes, not Hedgehogs. This analogy also comes from Tetlock’s book. According to an old Greek saying, “The fox knows many things but the hedgehog knows just one thing.” To that I would add: the fox knows a little about many things, but the hedgehog knows a lot about one thing. In other words, the hedgehog is an expert.

In Tetlock’s study, people with “fox” qualities had a significantly better track record than “hedgehogs” when it came to predicting the future. Their brains were better able to take the time to synthesize the various data inputs required to deal with the complexity of crystal-balling the future because they weren’t barrelling down a pre-ordained path that had been carved by years of accumulated expertise.

But it’s not just expertise that creates these ruts in our brains. The same pattern plays out when we look at the role our beliefs play in how open-minded we are. The stronger the belief, the deeper the rut.

Again, we have to remember that this tendency of our brains to form well-travelled grooves over time has been crafted by the blind watchmaker of evolution. But that doesn’t make it any less troubling when we think about the limitations it imposes in a more complex world. This is especially true when new technologies deliberately leverage our vulnerability in this area. Digital platforms ruthlessly eliminate the real estate that lies between perspectives. The ideological landscape in which foxes can effectively operate is disappearing. Increasingly we grasp for expertise – whether it’s on the right or left of any particular topic – with the goal of preserving our own mental ruts.

And as the ruts get deeper, foxes are becoming an endangered species.

Why Quitting Facebook is Easier Said than Done

Not too long ago, I was listening to an interview with a privacy expert about… you guessed it, Facebook. The gist of the interview was that Facebook can’t be trusted with our personal data, as it has proven time and again.

But when asked if she would quit Facebook completely because of this — as tech columnist Walt Mossberg did — the expert said something interesting: “I can’t really afford to give up Facebook completely. For me, being able to quit Facebook is a position of privilege.”

Wow!  There is a lot living in that statement. It means Facebook is fundamental to most of our lives — it’s an essential service. But it also means that we don’t trust it — at all.  Which puts Facebook in the same category as banks, cable companies and every level of government.

Facebook — in many minds anyway — became an essential service because of Metcalfe’s Law, which states that the effect of a network is proportional to the square of the number of connected users of the system. More users = exponentially more value. Facebook has Metcalfe’s Law nailed. It has almost two and a half billion users.
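
To make “proportional to the square” concrete, here’s a back-of-the-envelope sketch. The numbers are illustrative only, and the constant of proportionality is ignored; the point is how fast the curve climbs.

# Metcalfe's Law sketch: a network's value grows with the square of its user count.
# The absolute values are meaningless; watch how quickly they climb.
def metcalfe_value(users):
    return users ** 2   # value proportional to n^2 (constant factor ignored)

for users in (1_000, 1_000_000, 2_500_000_000):
    print(f"{users:>13,} users -> relative value {metcalfe_value(users):,}")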

But it’s more than just sheer numbers. It’s the nature of engagement. Thanks to a premeditated addictiveness in Facebook’s design, its users are regular users. Of those 2.5 billion users, 1.6 billion log in daily. 1.1 billion log in daily from their mobile device. That means that 15% of all the people in the world are constantly — addictively — connected to Facebook.

And that’s why Facebook appears to be essential. If we need to connect to people, Facebook is the most obvious way to do it. If we have a business, we need Facebook to let our potential customers know what we’re doing. If we belong to a group or organization, we need Facebook to stay in touch with other members. If we are social beasts at all, we need Facebook to keep our social network from fraying away.

We don’t trust Facebook — but we do need it.

Or do we? After all, we homo sapiens have managed to survive for 99.9925% of our collective existence without Facebook. And there is mounting research that indicates  going cold turkey on Facebook is great for your own mental health. But like all things that are good for you, quitting Facebook can be a real pain in the ass.

Last year, New York Times tech writer Brian Chen decided to ditch Facebook. This is a guy who is fully conversant in tech — and even he found making the break is much easier said than done. Facebook, in its malevolent brilliance, has erected some significant barriers to exit for its users if they do try to make a break for it.

This is especially true if you have fallen into the convenient trap of using Facebook’s social sign-in on sites rather than juggling multiple passwords and user IDs. If you’re up for the challenge, Chen has put together a 6-step guide to making a clean break of it.

But what if you happen to use Facebook for advertising? You’ve essentially sold your soul to Zuckerberg. Reading through Chen’s guide, I’ve decided that it’s just easier to go into the Witness Protection Program. Even there, Facebook will still be tracking me.

By the way, after six months without Facebook, Chen did a follow-up on how his life had changed. The short answer is: not much, but what did change was for the better. His family didn’t collapse. His friends didn’t desert him. He still managed to have a social life. He spent a lot less on spontaneous online purchases. And he read more books.

The biggest outcome was that advertisers “gave up on stalking” him. Without a steady stream of personal data from Facebook, Instagram thought he was a woman.

Whether you’re able to swear off Facebook completely or not, I wonder what the continuing meltdown of trust in Facebook will do for its usage patterns. As in most things digital, young people seem to have intuitively stumbled on the best way to use Facebook. Use it if you must to connect to people when you need to (in their case, grandmothers and great-aunts) — but for heaven’s sake, don’t post anything even faintly personal. Never afford Facebook’s AI the briefest glimpse into your soul. No personal affirmations, no confessionals, no motivational posts and — for the love of all that is democratic — nothing political.

Oh, one more thing. Keep your damned finger off of the like button, unless it’s for your cousin Shermy’s 55th birthday celebration in Zihuatanejo.

Even then, maybe it’s time to pick up the phone and call the ol’ Shermeister. It’s been too long.

The Hidden Agenda Behind Zuckerberg’s “Meaningful Interactions”

It probably started with a good intention. Facebook – aka Mark Zuckerberg – wanted to encourage more “Meaningful Interactions”. And so, early last year, Facebook engineers started making some significant changes to the algorithm that determined what you saw in your News Feed. Here are some excerpts from Zuck’s post to that effect:

“The research shows that when we use social media to connect with people we care about, it can be good for our well-being. We can feel more connected and less lonely, and that correlates with long term measures of happiness and health. On the other hand, passively reading articles or watching videos — even if they’re entertaining or informative — may not be as good.”

That makes sense, right? It sounds logical. Zuckerberg went on to say how they were changing Facebook’s algorithm to encourage more “Meaningful Interactions.”

“The first changes you’ll see will be in News Feed, where you can expect to see more from your friends, family and groups.

As we roll this out, you’ll see less public content like posts from businesses, brands, and media. And the public content you see more will be held to the same standard — it should encourage meaningful interactions between people.”


Let’s fast-forward almost two years, and we now see the outcome of that good intention… an ideological landscape with a huge chasm where the middle ground used to be.

The problem is that Facebook’s algorithm naturally favors content from like-minded people. And it doesn’t take a very high degree of ideological homogeneity to create a highly polarized landscape. This shouldn’t have come as a surprise. American economist Thomas Schelling showed us how easy it was for segregation to happen almost 50 years ago.

The Schelling Model of Segregation was created to demonstrate why racial segregation was such a chronic problem in the U.S., even given repeated efforts to desegregate. The model showed that even when we’re pretty open minded about who our neighbors are, we will still tend to self-segregate over time.

The model works like this. A grid represents a population with two different types of agents: X and O. The square that the agent is in represents where they live. If the agent is satisfied, they will stay put. If they aren’t satisfied, they will move to a new location. The variable here is the level of satisfaction, determined by what percentage of their immediate neighbours are the same type of agent as they are. For example, the level of satisfaction might be set at 50%, where an X agent needs at least 50% of its neighbours to also be of type X. (If you want to try the model firsthand, Frank McCown, a Computer Science professor at Harding University, created an online version.)

The most surprising thing that comes out of the model is that this threshold of satisfaction doesn’t have to be set very high at all for extensive segregation to happen over time. You start to see significant “clumping” of agent types at percentages as low as 25%. At 40% and higher, you see sharp divides between the X and O communities. Remember, even at 40%, that means that Agent X only wants 40% of their neighbours to also be of the X persuasion. They’re okay being surrounded by up to 60% Os. That is much more open-minded than most human agents I know.
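
If you want to see the mechanics without leaving your text editor, here’s a minimal sketch of a Schelling-style model in Python. It’s my own stripped-down version with arbitrary parameters, not McCown’s implementation: two agent types on a grid, and any agent with too few like-typed neighbours moves to a random empty cell.

import random

SIZE = 30             # the grid is SIZE x SIZE
EMPTY_FRACTION = 0.1  # share of cells left empty so agents have somewhere to move
SATISFACTION = 0.40   # an agent wants at least 40% of its neighbours to match its type
STEPS = 30            # number of rounds to simulate

random.seed(42)
cells = ['X', 'O'] * int(SIZE * SIZE * (1 - EMPTY_FRACTION) / 2)
cells += [None] * (SIZE * SIZE - len(cells))
random.shuffle(cells)
grid = [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

def unsatisfied(r, c):
    """True if the agent at (r, c) has too few like-typed neighbours."""
    agent = grid[r][c]
    same = other = 0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == 0 and dc == 0:
                continue
            nr, nc = r + dr, c + dc
            if 0 <= nr < SIZE and 0 <= nc < SIZE and grid[nr][nc] is not None:
                if grid[nr][nc] == agent:
                    same += 1
                else:
                    other += 1
    total = same + other
    return total > 0 and same / total < SATISFACTION

for step in range(STEPS):
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] is not None and unsatisfied(r, c)]
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
    random.shuffle(movers)
    for r, c in movers:
        nr, nc = empties.pop(random.randrange(len(empties)))
        grid[nr][nc], grid[r][c] = grid[r][c], None   # move the agent, vacate the old cell
        empties.append((r, c))
    print(f"round {step + 1}: {len(movers)} unsatisfied agents moved")

# Print the final grid; contiguous blocks of X and O are the "clumping".
for row in grid:
    print(''.join(cell or '.' for cell in row))

Run it with SATISFACTION at 0.25 and again at 0.40: the count of unsatisfied agents should fall off round by round, and the final grid shows exactly the kind of clumping Schelling described – produced by agents who would happily live as a minority on their own block.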

Now, let’s move the Schelling Model to Facebook. We know from the model that even pretty open-minded people will physically segregate themselves over time. The difference is that on Facebook, they don’t move to a new part of the grid, they just hit the “unfollow” button. And the segregation isn’t physical – it’s ideological.

This natural behavior is then accelerated by Facebook’s “Meaningful Interactions” algorithm, which filters on the basis of the people you have connected with, setting in motion an ever-tightening spiral that eventually restricts your feed to a very narrow ideological horizon. The resulting cluster then becomes a segment used for ad targeting. We can quickly see how Facebook first built these very homogeneous clusters by changing its algorithm and then profits from them by providing advertisers the tools to micro-target them.

Finally, after doing all this, Facebook absolves itself of any responsibility to ensure subversive and blatantly false messaging isn’t delivered to these ideologically vulnerable clusters. It’s no wonder comedian Sacha Baron Cohen just took Zuck to task, saying “if Facebook were around in the 1930s, it would have allowed Hitler to post 30-second ads on his ‘solution’ to the ‘Jewish problem.’”

In rereading Mark Zuckerberg’s post from two years ago, you can’t help but start reading between the lines. First of all, there is mounting evidence that disproves his contention that meaningful social media encounters help your well-being. It appears that quitting Facebook entirely is much better for you.

And secondly, I suspect that – just like his defence of running false and malicious advertising by citing free speech – Zuck has a not-so-hidden agenda here. I’m sure Zuckerberg and his Facebook engineers weren’t oblivious to the fact that their changes to the algorithm would result in nicely segmented psychographic clusters that would be like catnip to advertisers – especially political advertisers. They were consolidating exactly the same vulnerabilities that were exploited by Cambridge Analytica.

They were building a platform that was perfectly suited to subvert democracy.

Running on Empty: Getting Crushed by the Crush It Culture

“Nobody ever changed the world on 40 hours a week.”

Elon Musk

Those damned Protestants and their work ethic. Thanks to them, unless you’re willing to put in a zillion hours a week, you’re just a speed bump on the road to all that is good in the world. Take Mr. Musk, for example. If you happen to work at Tesla, or SpaceX, or the Boring Company, Elon has figured out what your average work week should be: “(It) Varies per person, but about 80 sustained, peaking above 100 at times. Pain level increases exponentially above 80.”

“Pain level increases exponentially above 80”? WTF, Mr. Musk!

But he’s not alone. Google famously built its Mountain View campus so employees never had to go home. Alibaba Group founder Jack Ma calls the intense work culture at his company a “huge blessing.” He calls it the “996” work schedule: 9 am to 9 pm, 6 days a week. That’s 72 hours, if you’re counting. But even that wouldn’t cut it if you work for Elon Musk. You’d be a deadbeat.

This is the “Crush It” culture, where long hours equate to dedication and – by extension – success. No pain, no gain.

We spend lots of time talking about the gain – so let me spend just one column talking about the pain. Pain such as mental illness, severe depression, long term disabilities and strokes. Those that overwork are more likely to over-eat, smoke, drink excessively and develop other self-destructive habits.

You’re not changing the world. You’re shortening your life. The Japanese call it karoshi: death by overwork.

Like so many things, this is another unintended consequence of a digitally mediated culture. Digital speeds everything up. But our bodies – and brains – aren’t digital. They burn out if they move too fast – or too long.

Overwork as a sign of superior personal value is a fairly new concept in the span of human history. It came from the Puritans who settled in New England. They believed that those that worked hard at their professions were those chosen to get into heaven. The more wealth you amassed from your work, the more evidence there was that you were one of the chosen.

Lately, the creeping capitalist culture of overwork has most firmly embedded itself in the tech industry. There, the number of hours you work has become a proxy for your own worth. A twisted type of machismo has evolved and has trapped us all into thinking that an hour not spent at our jobs is an hour wasted. We are looked down upon for wanting some type of balance in our lives.

Unfortunately for the Musks and Mas and other modern-day taskmasters – the biology just doesn’t support their proposed work schedules.

First, our brains need rest. Back in the 18th century when those Puritans proved their worth through work, earning a living was usually a physical endeavour. The load of overwork was spread amongst the fairly simple mechanical machinery of our own bodies. Muscles got sore. Joints ached. But they recovered.

The brain is a much more complex beast. When it gets overworked, it loses its executive ability to focus on the task at hand. When your work takes place on a desktop or laptop where there are unlimited diversions just a click away, you suddenly find yourself 45 minutes into an unplanned YouTube marathon or scrolling through your Facebook feed. It becomes a downward spiral that benefits no one.

An overworked mind also loses its ability to spin down in the evening so you can get an adequate amount of sleep. When your co-workers start boasting of being able to function on just 3 or 4 hours of sleep – they are lying. They are lying to you, but worse, they are lying to themselves. Very few of us can function adequately on less than 7 or 8 hours of sleep. For the rest of us, the negative effects start to accumulate. A study found that sleep deprivation has the same impact as drinking too much. Those getting less than 7 hours of sleep fared the same as, or worse than, those with a 0.05% blood alcohol level on a cognitive test. The legal limit in most states is 0.08%.

Finally, in an essay on Medium, Rachel Thomas points out that the Crush It Culture is discriminatory. Those that have a disability or chronic illness simply have fewer hours in the day to devote to work. They need time for medical support and usually require more sleep. In an industry like tech, where there is an unhealthy focus on the number of hours worked, these workers – who Thomas says make up at least 30% of the total workforce – are shut out.

The Crush It Culture is toxic. The science simply doesn’t support it. The only ones evangelizing it are those that directly benefit from this modernized version of feudalism.  It’s time to call Bullshit on them.

This Election, Canucks were “Zucked”

Note: I originally wrote this before results were available. Today, we know Trudeau’s Liberals won a minority government, but the Conservatives actually won the popular vote: 34.4% vs 33.06% for the Liberals. It was a very close election.

As I write this, Canadians are going to the polls in our national election. When you read this, the outcome will have been decided. I won’t predict — because this one is going to be too close to call.

For a nation that is often satirized for our tendencies to be nice and polite, this has been a very nasty campaign. So nasty, in fact, that in focusing on scandals and personal attacks, it forgot to mention the issues.

Most of us are going to the polls today without an inkling of who stands for what. We’re basically voting for the candidate we hate the least. In other words, we’re using the same decision strategy we used to pick the last guest at our grade 6 birthday party.

The devolution of democracy has now hit the Great White North, thanks to Facebook and Mark Zuckerberg.

While the amount of viral vitriol I have seen here is still a pale shadow of what I saw from south of the 49th in 2016, it’s still jarring to witness. Canucks have been “Zucked.” We’re so busy slinging mud that we’ve forgotten to care about the things that are essential to our well being as a nation.

It should come as news to no one that Facebook has been wantonly trampling the tenets of democracy. Elizabeth Warren recently ran a fake ad on Facebook just to show she could. Then Mark Zuckerberg defended Facebook last week when he said: “While I certainly worry about an erosion of truth, I worry about living in a world where you can only post things that tech companies decide to be 100 per cent true.”

Zuckerberg believes the onus lies with the Facebook user to be able to judge what is false and what is not. This is a suspiciously convenient defense of Facebook’s revenue model wrapped up as a defense of freedom of speech. At best it’s naïve, not to mention hypocritical. What we see is determined by Facebook’s algorithm. At worst it’s misleading and malicious.

Hitting hot buttons tied to emotions is nothing new in politics. Campaign runners have been drawing out and sharpening the long knives for decades now. TV ads added a particularly effective weapon into the political arsenal. In the 1964 presidential campaign, it even went nuclear with Lyndon Johnson’s famous “Daisy” Ad.

But this is different. For many reasons.

First of all, there is the question of trust in the channel. We have been raised in a world where media channels historically take some responsibility to delineate between what they say is factual (i.e., the news) and what is paid persuasion (i.e., the ads).

In his statement, Zuckerberg is essentially telling us that giving us some baseline of trust in political advertising is not Facebook’s job and not their problem. We should know better.

But we don’t. It’s a remarkably condescending and convenient excuse for Zuckerberg to appear to be telling us “You should be smarter than this” when he knows that this messaging has little to do with our intellectual horsepower.

This is messaging that is painstakingly designed to be mentally processed before the rational part of our brain even kicks in.

In a recent survey, three out of four Canadians said they had trouble telling which social media accounts were fake. And 40% of Canadians said they had found links to stories on current affairs that were obviously false. Those were only the links they knew were fake. I assume that many more snuck through their factual filters. By the way, people of my generation are the worst at sniffing out fake news.

We’ve all seen it, but only one third of Canadians 55 and over realize it. We can’t all be stupid.

Because social media runs on open platforms, with very few checks and balances, it’s wide open for abuse. Fake accounts, bots, hacks and other digital detritus litter the online landscape. There has been little effective policing of this. The issue is that cracking down on this directly impacts the bottom line. As Upton Sinclair said: “It is difficult to get a man to understand something, when his salary depends on his not understanding it.”

Even given these two gaping vulnerabilities, the biggest shift when we think of social media as an ad platform is that it is built on the complexity of a network. The things that come with this — things like virality, filter bubbles, threshold effects — have no corresponding rule book to play by. It’s like playing poker with a deck full of wild cards.

Now — let’s talk about targeting.

When you take all of the above and then factor in the data-driven targeting that is now possible, you light the fuse on the bomb nestled beneath our democratic platforms. You can now segment out the most vulnerable, gullible, volatile sectors of the electorate. You can feed them misinformation and prod them to action. You can then sit back and watch as the network effects play themselves out. Fan — meet shit. Shit — meet fan.

It is this that Facebook has wrought, and then Mark Zuckerberg feeds us some holier-than-thou line about freedom of speech.

Mark, I worry about living in a world where false — and malicious — information can be widely disseminated because a tech company makes a profit from it.

The Internet: Nasty, Brutish And Short

When the internet ushered in an explosion of information in the mid to late 90s there were many — I among them — who believed humans would get smarter. What we didn’t realize then is that the opposite would eventually prove to be true.

The internet lures us into thinking with half a brain. Actually, with less than half a brain – and the half we’re using is the least thoughtful, most savage half. The culprit is the speed of connection and reaction. We are now living in a pinball culture, where the speed of play determines that we have to react by instinct. There is no time left for thoughtfulness.

Daniel Kahneman’s monumental book, “Thinking, Fast and Slow,” lays out the two loops we use for mental processing. There’s the fast loop, our instinctive response to situations, and there’s the slow loop, our thoughtful processing of reality.

Humans need both loops. This is especially true in the complexity of today’s world. The more complex our reality, the more we need the time to absorb and think about it.

 If we could only think fast, we’d all believe in capital punishment, extreme retribution and eye-for-eye retaliation. We would be disgusted and pissed off almost all the time. We would live in the Hobbesian State of Nature (from English philosopher Thomas Hobbes): The “natural condition of mankind” is what would exist if there were no government, no civilization, no laws, and no common power to restrain human nature. The state of nature is a “war of all against all,” in which human beings constantly seek to destroy each other in an incessant pursuit for power. Life in the state of nature is “nasty, brutish and short.”

That is not the world I want to live in. I want a world of compassion, empathy and respect. But the better angels of our nature rely on thoughtfulness. They take time to come to their conclusions.

With its dense interconnectedness, the internet has created a culture of immediate reaction. We react without all the facts. We are disgusted and pissed off all the time. This is the era of “cancel” and “callout” culture. The court of public opinion is now less like an actual court and more like a school of sharks in a feeding frenzy.

We seem to think this is OK because for every post we see that makes us rage inside, we also see posts that make us gush and goo. Every hateful tweet we see is leavened with a link to a video that tugs at our heartstrings. We are quick to point out that, yes, there is the bad — but there is an equal amount of good. Either can go viral. Social media simply holds up a mirror that reflects the best and worst of us.

But that’s not really true. All these posts have one thing in common: They are digested too quickly to allow for thoughtfulness. Good or bad, happy or mad — we simply react and scroll down. FOMO continues to drive us forward to the next piece of emotionally charged clickbait. 

There’s a reason why social media is so addictive: All the content is aimed directly at our “Thinking Fast” hot buttons. And evolution has reinforced those hot buttons with generous discharges of neurochemicals that act as emotional catalysts. Our brain online is a junkie jonesing for a fix of dopamine or noradrenaline or serotonin. We get our hit and move on.

Technology is hijacking our need to pause and reflect. Marshall McLuhan was right: The medium is the message and, in this case, the medium is one that is hardwired directly to the inner demons of our humanity. It took humans over five thousand years to become civilized. Ironically, one of our greatest achievements is dismantling that civilization faster than we think. Literally.