Avoiding the Truth: Dodging Reality through Social Media

“It’s very hard to imagine that the revival of Flat Earth theories could have happened without the internet.”

Keith Kahn-Harris – Sociologist and author of “Denial: The Unspeakable Truth” – in an interview on CBC Radio

On November 9, 2017, 400 people got together in Raleigh, North Carolina. They all believe the earth is flat. This November 15th and 16th, they will do it again in Denver, Colorado. If you are so inclined, you could even join other flat earthers for a cruise in 2019. The Flat Earth Society is a real thing. They have their own website. And – of course – they have their own Facebook page (actually, there seem to be a few pages. Apparently, there are Flat Earth Factions.)

Perhaps the most troubling thing is this: it isn’t a joke. These people really believe the earth is flat.

How can this happen in 2018? For the answer, we have to look inwards – and backwards – to discover a troubling fact about ourselves. We’re predisposed to believe stuff that isn’t true. And, as Mr. Kahn-Harris points out, this can become dangerous when we add an obsessively large dose of time spent online, particularly with social media.

It makes sense that there was an evolutionary advantage for a group of people who lived in the same area and dealt with the same environmental challenges to share the same basic understanding of things. These commonly held beliefs allowed group learnings to be passed down to the individual: eating those red berries would make you sick, wandering alone in the savannah was not a good idea, coveting thy neighbor’s wife might get you stabbed in the middle of the night. Our beliefs often saved our ass.

Because of this, it was in our interest to protect our beliefs. They formed part of our “fast” reasoning loop, not requiring our brain to kick in to do any processing. Cognitive scientists refer to this as “fluency”. Our brains have evolved to be lazy. If they don’t have to work, they don’t. And in the adaptive environment we evolved in – for reasons already stated – this cognitive shortcut generally worked to our benefit. Ask anyone who has had to surrender a long-held belief. It’s tough to do. Overturning a belief requires a lot of cognitive horsepower. It’s far easier to protect it with a scaffolding of supporting “facts” – no matter how shaky that scaffolding may be.

Enter the Internet. And the usual suspect? Social media.

As I said last week, the truth is often hard to handle – especially if it runs headlong into our beliefs. I don’t want to believe in climate change because the consequences of that truth are mind-numbingly frightening. But I find I’m forced to. I also don’t believe the earth is flat. For me, in both cases, the evidence is undeniable. That’s me, however. There are plenty of people who don’t believe climate change is real and – according to the Facebook Official Flat Earth Discussion group – there are at least 107,372 people who believe the earth is flat. The same evidence is also available to them. Why are we different?

When it comes to our belief structure, we all have different mindsets, plotted on a spectrum of credulity. I’m what you may call a scientific skeptic. I tend not to believe something is true unless I see empirical evidence supporting it. There are others who tend to believe in things at a much lower threshold. And this tendency is often found across multiple domains. The mindset that embraces creationism, for example, has been shown to also embrace conspiracy theories.

In the pre-digital world, our beliefs were a feature, not a bug. When we shared a physical space with others, we also relied on a shared “mind-space” that served us well. Common beliefs created a more cohesive social herd and were typically proven out over time against the reality of our environment. Beneficial beliefs were passed along and would become more popular, while non-beneficial beliefs were culled from the pack. It was the cognitive equivalent of Adam Smith’s “Invisible hand.” We created a belief marketplace.

Beliefs are moderated socially. The more unpopular our own personal beliefs, the more pressure there is to abandon them. There is a tipping point mechanism at work here. Again, in a physically defined social group, those whose mindsets tend to look for objective proof will be the first to abandon a belief that is obviously untrue. From this point forward, social contagion can be a more effective factor in helping the new perspective spread through a population than the actual evidence. “What is true?” is not as important as “what does my neighbor believe to be true?”

This is where social media comes in. On Facebook, a community is defined in the mind, not in any particular physical space. Proximity becomes irrelevant. Online, we can always find others that believe in the same things we do. A Flat Earther can find comfort by going on a cruise with hundreds of other Flat Earthers and saying that 107,372 people can’t be wrong. They can even point to “scientific” evidence proving their case. For example, if the earth wasn’t flat, a jetliner would have to continually point its nose down to keep from flying off into space (granted, this argument conveniently ignores gravity and all types of other physics, but why quibble).

Social media provides a progressive banquet of options for dealing with unpleasant truths. Probably the most benign of these is something I wrote about a few weeks back – slacktivism. At least slacktivists acknowledge the truth. From there, you can progress to a filtering of facts (only acknowledging the truths you can handle), wilful ignorance (purposely avoiding the truth), denialism (rejecting the truth) and full-out fantasizing (manufacturing an alternate set of facts). Examples of all these abound on social media.

In fact, the only thing that seems hard to find on Facebook is the bare, unfiltered, unaltered truth. And that’s probably because we’re not looking for it.


Why We No Longer Want to Know What’s True

“Truth isn’t truth” – Rudy Giuliani – August 19, 2018

Even without Giuliani’s bizarre statement, we’re developing a weird relationship with the truth. It’s becoming even more inconvenient. It’s certainly becoming more worrisome. I was chatting with a psychiatrist the other day who counsels seniors. I asked him if he was noticing more general anxiety in that generation – a feeling of helplessness with how the world seems to be going to hell in a handbasket. I asked him that because I am less optimistic about the future than I ever have been in my life. I wanted to know if that was unusual. He said it wasn’t – I had plenty of company.

You can pick the truth that is most unsettling. Personally, I lose sleep over climate change, the rise of populist politics and the resurgence of xenophobia. I have to limit the amount of news I consume in any day, because it sends me into a depressive state. I feel helpless. And as much as I’m limiting my intake because of my own mental health, I can’t help thinking that this is a dangerous path I’m heading down.

After doing a little research, I have found that things like PTSD (President Trump Stress Disorder) or TAD (Trump Anxiety Disorder) are real things. They’re recognized by the American Psychological Association. After a ten-year decline, anxiety levels in the US spiked dramatically after November 2016. Clinical psychologist Jennifer Panning, who coined the term TAD, says “the symptoms include feeling a loss of control and helplessness, and fretting about what’s happening in the country and spending excessive time on social media.”

But it’s not just the current political climate that’s causing anxiety. It’s also the climate itself. Enter “ecoanxiety.” Again…the APA in a recent paper nails a remarkably accurate diagnosis of how I’m feeling: “Gradual, long-term changes in climate can also surface a number of different emotions, including fear, anger, feelings of powerlessness, or exhaustion.”

“You can’t handle the truth” – Colonel Nathan R. Jessep (from the movie “A Few Good Men”)

So – when the truth scares the hell out of you – what do you do? We can find a few clues in the quotes above. One is this idea of a loss of control. The other is spending excessive time on social media. My belief is that the latter exacerbates the former.

In a sense, Rudy Giuliani is right. Truth isn’t truth, at least, not on the receiving end. We all interpret truth within the context of our own perceived reality. This in no way condones the manipulation of truth upstream from when it reaches us. We need to trust that our information sources are providing us the closest thing possible to a verifiable and objective view of truth.  But we have to accept the fact that for each of us, truth will ultimately be filtered through our own beliefs and understanding of what is real. Part of our own perceived reality is how in control we feel of the current situation. And this is where we begin to see the creeping levels of anxiety.

In 1954, psychologist Julian Rotter introduced the idea of a “locus of control” – the degree of control we believe we have over our own lives. For some of us, our locus is tipped to the internal side. We believe we are firmly at the wheel of our own lives. Others have an external locus, believing that life is left to forces beyond our control. But like most concepts in psychology, the locus of control is not a matter of black and white. It is a spectrum of varying shades of gray. And anxiety can arise when our view of reality seems to be beyond our own locus of control.

The word locus itself comes from the Latin for “place” or “location.” Typically, our control is exercised over those things that are physically close to us. And up until 150 years ago, that worked well. We had little awareness of things beyond our own little world so we didn’t need to worry about them. But electronic media changed that. Suddenly, we were aware of wars, pestilence, poverty, famines and natural disasters from around the world. This made us part of Marshall McLuhan’s “Global Village.” The circle of our “locus of awareness” suddenly had to accommodate the entire world but our “locus of control” just couldn’t keep pace.

Even with this expansion of awareness, one could still say that truth remained relatively true. There was an editorial check and balance process that checked the veracity of the information we were presented. It certainly wasn’t perfect, but we could place some confidence in the truth of what we read, saw and heard.

And then came social media. Social media creates a nasty feedback loop when it comes to the truth. Once again, Dr. Panning typified these new anxieties as, “fretting about what’s happening in the country and spending excessive time on social media.” The algorithmic targeting of social media platforms means that you’re getting a filtered version of the truth. Facebook knows exactly what you’re most anxious about and feeds you a steady diet of content tailored specifically to those anxieties. We have the comfort of seeing posts from members of our network that seem to fear the same things we do and share the same beliefs. But the more time we spend seeking this comfort, the more we’re exposed to the anxiety-inducing triggers and the further and further we drift from the truth. It creates a downward spiral that leads to these new types of environmental anxiety we are seeing. And to deal with those anxieties we’re developing new strategies for handling the truth – or, at least – our version of the truth. That’s where I’ll pick up next week.


Deconstructing the Google/Facebook Duopoly

We’ve all heard about it. The Google/Facebook Duopoly. This was what I was going to write about last week before I got sidetracked. I’m back on track now (or, at least, somewhat back on track). So let’s start by understanding what a duopoly is…

…a situation in which two suppliers dominate the market for a commodity or service.

And this, from Wikipedia…

… In practice, the term is also used where two firms have dominant control over a market.

So, to have a duopoly, you need two things: domination and control. First, let’s deal with the domination question. In 2017, Google and Facebook together took a very healthy 59% slice of all digital ad revenues in the US. Google captured 38.6% of that total, with Facebook capturing 20%. That certainly seems dominant. But if online marketing is the market, that is a very large basket with a lot of different items thrown in. So, let’s do a broad categorization to help deconstruct this a bit. Typically, when I try to understand marketing, I like to start with humans – or more specifically – what that lump of grey matter we call a brain is doing. And if we’re talking about marketing, we’re talking about attention – how our brains are engaging with our environment. That is an interesting way to divide things up, because it neatly bisects the attentional market, with Google on one side and Facebook on the other.

Google dominates the top down, intent driven, attentionally focused market. If you’re part of this market, you have something in mind and you’re trying to find it. If we use search as a proxy for this attentional state (which is the best proxy I can think of) we see just how dominant Google is. It owns this market to a huge degree. According to Statista, Google held about 87% of the total worldwide search market as of April 2018. The key metric here is success. Google needs to be the best way to fulfill those searches. And if market share is any indication, it is.

Facebook apparently dominates the bottom up awareness market. These are the people killing time online and they are not actively looking with commercial intent. This is more of an awareness play where attention has to be diverted to an advertising message. Therefore, time spent becomes the key factor. You need to be in front of the right eyeballs, and so you need a lot of eyeballs and a way to target the right ones.

Here is where things get interesting. If we look at share of consumer time, Google dominates here. But there is a huge caveat, which I’ll get to in a second. According to a report this spring by Pivotal Research, Google owns just under 28% of all the time we spend consuming digital content. Facebook has just over a 16% share of this market. So why do we have a duopoly and not a monopoly? It’s because of that caveat – a whopping slice of Google’s “time spent” dominance comes from YouTube. And YouTube has an entirely different attentional profile – one that’s much harder to present advertising against. When you’re watching a video on YouTube, your attention is “locked” on the video. Disrupting that attention erodes the user experience. So Google has had a tough time monetizing YouTube.

According to Seeking Alpha, Google’s search ad business will account for 68% of their total revenue of $77 billion this year. That’s over $52 billion sitting in that “top-down” attentionally focused bucket. YouTube, which is very much in the “bottom-up” disruptive bucket, accounts for $12 billion in advertising revenues. Certainly nothing to sneeze at, but not on the same scale as Google’s search business. Facebook’s revenue, at about $36 billion, is also generated by this same “bottom-up” market, but they have a different attentional profile. The Facebook user is not as “locked in” as they are on YouTube. With the right targeting tools, something that Facebook has excelled at, you have a decent chance of gaining their attention long enough to notice your ad.
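The arithmetic behind these figures is easy to sanity-check. A rough back-of-the-envelope calculation in Python (using only the numbers cited above, all in billions of US dollars):

```python
# Rough check of the revenue split cited above.
# All amounts are in billions of US dollars, per the figures in the text.
google_total_revenue = 77.0   # Google's projected total revenue
search_ad_share = 0.68        # share attributed to the search ad business

search_ad_revenue = google_total_revenue * search_ad_share
print(f"Search ads: ~${search_ad_revenue:.1f}B")  # just over $52B
```

That puts Google’s search ads at roughly $52.4 billion, versus $12 billion for YouTube and about $36 billion for Facebook, which is the scale gap the paragraph above describes.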


If we look at the second part of the definition of a duopoly – that of control – we see some potential chinks in the armor of both Google and Facebook. Typically, market control was in the form of physical constraints against the competition. But in this new type of market, the control can only be in the minds of the users. The barriers to competitive entry are all defined in mental terms.

In Google’s case, they have a single line of defense: they have to be an unbreakable habit. Habits are mental scripts that depend on two things – obvious environmental cues that trigger habitual behavior and acceptable outcomes once the script completes. So, to maintain their habit, Google has to ensure that whatever environment you might be in when searching online for something, Google is just a click or two away. Additionally, they have to meet a certain threshold of success. Habits are tough to break, but these two dependencies – availability and successful outcomes – are also the two areas of vulnerability in Google’s dominance.

Facebook is a little different. They need to be addictive. This is a habit taken to the extreme. Addictions depend on pushing certain reward buttons in the brain that lead to an unhealthy behavioral script that becomes obsessive. The more addicted you are to Facebook and its properties, the more successful they will be in their dominance of the market. You can see the inherent contradiction here. Despite Facebook’s protests to the contrary, with their current revenue model they can only succeed at the expense of our mental health.

I find these things troubling. When you have two for-profit organizations fighting to dominate a market that is defined in our own minds, you have the potential for a lot of unhealthy corporate decisions.


Rethinking Media

I was going to write about the Facebook/Google duopoly, but I got sidetracked by this question, “If Google and Facebook are a duopoly, what is the market they are controlling?” The market, in this case, is online marketing, of which they carve out a whopping 61% of all revenue. That’s advertising revenue. And yet, we have Mark Zuckerberg testifying this spring in front of Congress that he is not a media company…

“I consider us to be a technology company because the primary thing that we do is have engineers who write code and build product and services for other people”

That may be an interesting position to take, but his adoption of a media-based revenue model doesn’t pass the smell test. Facebook makes revenue from advertising, and you can only sell advertising if you are a medium. A medium, by definition, is an intervening agency that provides a means of communication. The trade-off for providing that means is that you get to monetize it by allowing advertisers to piggyback on that communication flow. There is nothing in the definition of “media” about content creation.

Google has also used this defense. The common thread seems to be that they are exempt from the legal checks and balances normally associated with media because they don’t produce content. But they do accept content, they do have an audience and they do profit from connecting these two through advertising. It is disingenuous to try to split legal hairs in order to avoid the responsibility that comes from their position as a mediator.

But this all brings up the question:  what is “media”? We use the term a lot. It’s in the masthead of this website. It’s on the title slug of this daily column. We have extended our working definition of media, which was formed in an entirely different world, as a guide to what it might be in the future. It’s not working. We should stop.

First of all, definitions depend on stability, and the worlds of media and advertising are definitely not stable. We are in the middle of a massive upheaval. Secondly, definitions are mental labels. Labels are shortcuts we use so we don’t have to think about what something really is. And I’m arguing that we should be thinking long and hard about what media is now and what it might become in the future.

I can accept that technology companies want to disintermediate, democratize and eliminate transactional friction. That’s what technology companies do. They embrace elegance –  in the scientific sense – as the simplest possible solution to something. What Facebook and Google have done is simplified the concept of media back to its original definition: the plural of medium, which is something that sits in the middle. In fact – by this definition – Google and Facebook are truer media than CNN, the New York Times or Breitbart. They sit in between content creators and content consumers. They have disintermediated the distribution of content. They are trying to reap all the benefits of a stripped down and more profitable working model of media while trying to downplay the responsibility that comes with the position they now hold. In Facebook’s case, this is particularly worrisome because they are also aggregating and distributing that content in a way that leads to false assumptions and dangerous network effects.

Media as we used to know it gradually evolved a check and balance process of editorial oversight and journalistic integrity that sat between the content they created and the audience that would consume it. Facebook and Google consider those things transactional friction. They were part of an inelegant system. These “technology companies” did their best to eliminate those human dependent checks and balances while retaining the revenue models that used to subsidize them.

We are still going to need media in a technological future. Whether they be platforms or publishers, we are going to depend on and trust certain destinations for our information. We will become their audience and in exchange they will have the opportunity to monetize this audience. All this should not come cheaply. If they are to be our chosen mediators, they have to live up to their end of the bargain.



Drifting Alone on the Social Network

This was not your ordinary Facebook post (if there is such a thing).

For one thing, it was long. Almost 1600 words long. That’s longer than this column. Secondly, it was raw. It was written by somebody in deep pain who laid their soul bare for their entire network to see. I barely knew this person and I was given a look into the deepest and darkest part of their lives. The post told the story of the break-up of a marriage and a struggle with depression. It was a disturbing blow-by-blow chronicle of someone hitting the bottom.

A strange thing happened while I was reading the post. At one level, I responded as I hope any decent human would. I felt the pain of this person – even though we were barely acquaintances – and wanted to help in some way. But – in a sort of meta-awareness – I monitored myself as a sample of one to see what the longer-term impact was. This plea through social media seemed extraordinary in a number of ways. What were the possible unintended consequences of this online confessional?

I should add an additional – traumatic – context to this story. This post was catalyzed by the recent suicide of a well-known member of the industry I used to work in. Again, I was made aware of the tragedy through several posts on Facebook. And again, I barely knew the person involved but somewhere along the line we had connected through Facebook. In the last two days of his life, he had updated his status. He was young. He had a family. He should have had everything to live for. But then again, I really didn’t know him or his circumstances. I certainly didn’t know his pain. Judging by the shock in the comments on Facebook, I don’t think any of us knew.

And that’s what prompted this post I’m writing about. Obviously, this person wanted us to know his pain. He was asking for help. But he was also offering it to anyone who needed it. And he chose to do it through Facebook. This should be social media at its finest…a moving example of people connecting when it counts most. The post certainly touched those who read it. 80 comments – all supportive – followed the post. Many contained their own abbreviated confessions of going through similar pain. It seemed cathartic. I would even call it inspirational.

So why was I so troubled by this? Something seemed wrong.

Social Networks are Built on Weak Ties

Perhaps the problem is in the nature of our online social networks. In the 1990s, British anthropologist Robin Dunbar suggested our brains had a cognitive limit on the number of stable social relationships we could maintain. The number was 150, which has since become known as Dunbar’s Number.

In follow-up research, released in the last few years, Dunbar has found that within this circle of 150 acquaintances, there are smaller circles of increasingly more intimate friends. The next layer in is what we would probably call “friends” – people we choose to spend time with. That’s about 50 people. Then we have “close friends” – people we tend to socialize with more frequently. On average, we would have 15 of these. And finally, we have our closest friends – those we are intimately connected to. Dunbar puts our cognitive limit at 5 for these most precious connections.

I have about 450 “Friends” on Facebook. If Dunbar’s Number is correct, this is three times the number of social connections I can mentally coordinate. By necessity, they’re almost all what Mark Granovetter would refer to as “weak ties” – social connections that are not actively maintained. And my network is relatively small. Others in the online industry typically have social networks numbering well over a thousand connections. Yet, with all these thousands of connections, did they not have one of those very close friends they could reach out to in person? Perhaps they did, but the personal investment might have been too high.

The Psychology of the Online Confessional

We all need to be heard. And sometimes, it seems easier to confide in a stranger than a friend. We can talk without worrying about all the baggage we are carrying. Our closest friends know all about that baggage. The personal costs are much higher when we choose to go to a friend. I think – subconsciously – we sometimes tend to gravitate towards “weak ties” when things are at their worst. It’s the reason that psychotherapists and confessional booths exist.

Also, a confession is easier when it’s physically detached from the feedback. We can craft the language before we post. We are not sitting across from someone who might judge us. We are posting alone, and this can bring its own sense of comfort. But, unfortunately, that comfort can be short lived.

The Half Life of Online Empathy

Eventually, the empathy dies away and the social shaming begins. I wish this wasn’t the case – I wish humans were better than this – but we’re not. We’re just human.

If you’re not an absolute sociopath, you can’t help but be empathetic when someone lays their grieving soul bare for you. And the investment required to post a supportive comment is minimal. It is determined by the same cognitive algorithm I talked about last week regarding “slacktivism.” It’s a few seconds of our life and a handful of carefully selected words. At the time, we are probably sincere in our offer of help, but then we move on. This is a weak tie – a person we hardly know. We have no skin in the game.

If that seems callous and cruel on my part, there are previous examples to point to. Over and over again, we pour out our support when the pain is fresh, only to move on to the next thing more and more quickly. This is true when the tragedies are global in nature. I suspect the same is true when they’re more localized, with people we are passingly acquainted with. And these people have now gone public with their pain. It is now part of their digital footprint. Today, we may feel nothing but empathy. But how will we feel 6 weeks hence? Or 6 months? I would like to think we would remain noble, kind and gracious in our thoughts, but most of the evidence points to the contrary.

I didn’t want to be negative in the writing of this. I sincerely hope that such online pleas for help bring aid and comfort to the person in question. As I said, this was all sparked by someone who never got the help he needed at the right time. Perhaps a weak tie online is better than no tie at all.

But I will remain a strong believer in the power of a true person-to-person connection – with all its messiness and organic imperfection. We need more of that. And the more time we spend alone keying in our thoughts in front of the light blue glow of a monitor, the less likely that is to happen.


The Pros and Cons of Slacktivism

Lately, I’ve grown to hate my Facebook feed. But I’m also morbidly fascinated by it. It fuels the fires of my discontent with a steady stream of posts about bone-headedness and sheer WTF behavior.

As it turns out, I’m not alone. Many of us are morally outraged by our social media feeds. But does all that righteous indignation lead to anything?

Last week, MediaPost reran a column talking about how good people can turn bad online by following the path of moral outrage to mob-based violence. Today I ask, is there a silver lining to this behavior? Can the digital tipping point become a force for good, pushing us to take action to right wrongs?

The Ever-Touchier Triggers of Moral Outrage

As I’ve written before, normal things don’t go viral. The more outrageous and morally reprehensible something is, the greater likelihood there is that it will be shared on social media. So the triggering forces of moral outrage are becoming more common and more exaggerated. A study found that in our typical lives, only about 5% of the things we experience are immoral in nature.

But our social media feeds are algorithmically loaded to ensure we are constantly ticked off. This isn’t normal. Nor is it healthy.

The Dropping Cost of Being Outraged

So what do we do when outraged? As it turns out, not much — at least, not when we’re on Facebook.

Yale neuroscientist Molly Crockett studies the emerging world of online morality. And she found that the personal costs associated with expressing moral outrage are dropping as we move our protests online:

“Offline, people can harm wrongdoers’ reputations through gossip, or directly confront them with verbal sanctions or physical aggression. The latter two methods require more effort and also carry potential physical risks for the punisher. In contrast, people can express outrage online with just a few keystrokes, from the comfort of their bedrooms…”

What Crockett is describing is called slacktivism.

You May Be a Slacktivist if…

A slacktivist, according to Urbandictionary.com, is “one who vigorously posts political propaganda and petitions in an effort to affect change in the world without leaving the comfort of the computer screen.”

If your Facebook feed is at all like mine, it’s probably become choked with numerous examples of slacktivism. It seems like the world has become a more moral — albeit heavily biased — place. This should be a good thing, shouldn’t it?

Warning: Outrage Can be Addictive

The problem is that when morality moves online, it loses a lot of the social clout it has historically had to modify behaviors. Crockett explains:

“When outrage expression moves online it becomes more readily available, requires less effort, and is reinforced on a schedule that maximizes the likelihood of future outrage expression in ways that might divorce the feeling of outrage from its behavioral expression…”

In other words, outrage can become addictive. It’s easier to become outraged when it has no consequences for us, when it’s divorced from the normal societal checks and balances that govern our behavior, and when we can get a nice little ego boost from others “liking” or “sharing” our indignant rants. The last point is particularly true given the “echo chamber” characteristics of our social-media bubbles. These are all the prerequisites required to foster habitual behavior.

Outrage Locked Inside its own Echo Chamber

Another thing we have to realize about showing our outrage online is that it’s largely a pointless exercise. We are simply preaching to the choir. As Crockett points out:

“Ideological segregation online prevents the targets of outrage from receiving messages that could induce them (and like-minded others) to change their behavior. For politicized issues, moral disapproval ricochets within echo chambers but only occasionally escapes.”

If we are hoping to change anyone’s behavior by publicly shaming them, we have to realize that Facebook’s algorithms make this highly unlikely.

Still, the question remains: Does all this online indignation serve a useful purpose? Does it push us to action?

The answer seems to depend on two factors, each imposing its own threshold on our likelihood to act. The first is whether we’re truly outraged or not. Because showing outrage online is so easy, with few consequences and the potential social reward of a post going viral, it has all the earmarks of a habit-forming behavior. Are we posting because we’re truly mad, or just bored?

“Just as a habitual snacker eats without feeling hungry, a habitual online shamer might express outrage without actually feeling outraged,” writes Crockett.

Moving from online outrage to physical action – whether it’s changing our own behavior or acting to influence a change in someone else’s – requires a much bigger personal investment on almost every level. This brings us to the second threshold factor: our own personal experiences and situation. Millions of women upped the ante by actively supporting #MeToo because it was intensely personal for them. It’s one example of an online movement that became one of the most potent political forces in recent memory.

One thing does appear to be true. When it comes to social protest, there is definitely more noise out there. We just need a reliable way to convert that to action.

Why Do Good People Become Bad Online?

Here are some questions I have:

  • When do crowds turn ugly?
  • Why do people become trolls online?
  • When do opinions become morals, and what is the difference between the two?

Whether we like it or not, online connection engenders some decidedly bad behavior. It’s one of those unintended consequences that I like to talk about – a behavioral side effect that’s catalyzed by technology.  And, if this is the case, we should know a little more about the psychology behind this behavior.

Modified Mob Behavior

So, when does a group become a mob? And when does a mob turn ugly? There are some aspects of herd mentality that seem to be particularly conducive to online connections. A group turns into a mob when their behaviors become synced to a common purpose. A recent study from the University of Southern California found two predictive signals in social media behavior that indicate when a group protest may become a violent mob.

Tipping Over the Threshold from an Opinion to a Moral

One of the things they found is that when we go from talking about our opinions to preaching morality, things can take a nasty turn. Let’s imagine a spectrum running from loosely held opinions on the left end – things you’re not that emotionally invested in – to beliefs, and then to morals at the right end. This progression also correlates with different ways the brain processes the respective thoughts. At the least intense left end of the spectrum – opinions – we can process them with relatively detached rationality. But as we move to the right, different parts of the brain start kicking in and begin to raise the emotional stakes. When we believe we’re talking about morals, we suddenly have strongly held convictions about what is right and what is wrong. Morals are defined as “concerned with the principles of right and wrong behavior and the goodness or badness of human character.”

This triggers our ancient and universal feelings about fairness, harm, betrayal, subversion and degradation – the planks of moral foundations theory. The researchers in the USC study found that people are more likely to endorse violence when they moralize the issue. When there are clearly held beliefs about right and wrong, violence starts to seem acceptable.

Violence Needs Company

This moralizing signal is not necessarily tied to being online. But the second predictive signal is. The researchers also found that if people believe others share their views, they are more likely to tip over the threshold from peaceful protest to violence. This is Mark Granovetter’s crowd threshold effect that I’ve talked about before. In social media, this effect is amplified by content filtering and the structure of your network. Like-minded people naturally link to each other, and their posts make for remarkably efficient indicators of their beliefs. It’s very easy in a social network to feel that everybody you know feels the same way you do. The degree of violent language can escalate quickly through online posts until the entire group is pushed over the threshold into a mode of behavior that would be unthinkable for a disconnected individual.
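Granovetter’s threshold model is simple enough to sketch in a few lines of code. In his formulation, each person has a threshold: the fraction of the crowd that must already be participating before they will join in. The simulation below is a toy illustration with made-up thresholds, not data from any study; it shows how a single missing “rung” in the ladder of thresholds can stall a cascade that would otherwise sweep up everyone.

```python
# Toy sketch of Granovetter's threshold model of collective behavior.
# Each person joins once the fraction of current participants meets
# or exceeds their personal threshold. All numbers are illustrative.

def run_cascade(thresholds):
    """Let anyone whose threshold is met join, repeating until the
    crowd stabilizes. Returns the final fraction of participants."""
    n = len(thresholds)
    joined = [t == 0 for t in thresholds]  # zero-threshold instigators start
    changed = True
    while changed:
        changed = False
        fraction = sum(joined) / n
        for i, t in enumerate(thresholds):
            if not joined[i] and fraction >= t:
                joined[i] = True
                changed = True
    return sum(joined) / n

# An unbroken ladder of thresholds (0%, 1%, 2%, ...) cascades completely:
print(run_cascade([i / 100 for i in range(100)]))  # -> 1.0

# Remove the person with the 1% threshold and the same crowd stalls
# with only the lone instigator participating:
print(run_cascade([0.0] + [i / 100 for i in range(2, 101)]))  # -> 0.01
```

The second run is the striking part: two crowds with nearly identical average “radicalism” produce wildly different outcomes, which is why dense, like-minded online networks – where everyone can see that others have already joined – are such efficient cascade machines.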

Trolls, Trolls Everywhere

Another study, this time from Stanford, shows that any of us can become a troll. We would like to think that trolls are just a small group of particularly horrible people. But this research indicates that trolling is more situational than previously thought. In other words, if we’re in a bad mood, we’re more likely to become a troll.

But it’s not just our mood. Here again Granovetter’s threshold model plays a part. Negative comments beget more negative comments, starting a downward spiral of venom. The researchers ran a behavioral test in which participants had to do either an easy or a difficult task and then read an online article carrying either three neutral comments or three negative, troll-like comments. The results were eye-opening. In the group assigned the easy task and the article with neutral comments, about 35% posted a negative comment. Knowing that one in three of us seems to have a low threshold for becoming a troll is not exactly encouraging, but it gets worse. If participants either did the difficult task or read negative comments, the likelihood of posting a troll-like comment jumped to 50%. And if participants got both the difficult task and the negative comments, the number climbed to 68%! In the three-part study, another factor that could lead to trolling was the time that posts were made. Late Sunday and Monday nights are the worst times of the week for negative posts, and Twitter bullying hits its peak between 5 pm and 8 pm on Sunday. While we’re on the subject, Donald Trump tends to tweet early in the morning, and his most inflammatory tweets come on Saturdays.

But when it comes to trolling, there’s something else at play here as well. Yet another study, this time from Federation University Australia’s Mount Helen campus, found that our own brand of empathy can also predict whether we’re going to become a troll or not. There are two kinds: cognitive and affective empathy. Cognitive empathy means you can understand other people’s emotions – you know what will make them happy or mad. Affective empathy means you can internalize and experience the emotions of another – if they’re happy, you’re happy; if they’re mad, you’re mad. Not surprisingly, trolls tend to have high cognitive empathy but low affective empathy. Obviously, there were plenty of such people before the internet, but they’ve now gained the perfect forum for their twisted form of empathy. They can incite negativity relatively free from social consequence and reprisal. Even if the comments made are not anonymous, the poster can hide behind a degree of detachment that would be impossible in a physical environment.

So, why should we care? Again, it comes back to this idea of the unintended social consequences of technology. Increasingly, our connections are digital in nature. And for reasons already stated, I worry that these types of connections may bring out the worst in us.