The Strange Polarity of Facebook’s Moral Compass

For Facebook, 2018 came in like a lion and went out like a really pissed-off Godzilla with a savagely bad hangover after the Mother of all New Year’s Eve parties. In other words, it was not a good year.

As Zuckerberg’s 2018 shuddered to its close, it was disclosed that Facebook and Friends had opened our personal data kimonos for any of their “premier” partners. This was in direct violation of their own data privacy policy, which makes it even more reprehensible than usual. This wasn’t a bone-headed fumbling of our personal information. This was a fully intentional plan to financially benefit from that data in a way we didn’t agree to, to hide that fact from us, and then to deliberately lie about it on more than one occasion.

I was listening to a radio interview about this latest revelation, and one of the analysts – social media expert and author Alexandra Samuel – mused about when it was that Facebook lost its moral compass. She has been familiar with the company since its earliest days, having had the opportunity to talk to Mark Zuckerberg personally. In her telling, Zuckerberg is an evangelist who lost his way, drawn to the dark side by the corporate curse of profit and greed.

But Siva Vaidhyanathan – the Robertson Professor of Modern Media Studies at the University of Virginia – tells a different story. And it’s one that seems much more plausible to me. Zuckerberg may indeed be an evangelist, although I suspect he’s more of a megalomaniac. Either way, he does have a mission. And that mission is not opposed to corporate skullduggery. It fully embraces it. Zuckerberg believes he’s out to change the world, while making a shitload of money along the way. And he’s fine with that.

That came as a revelation to me. I spent a good part of 2018 wondering how Facebook could have been so horrendously cavalier with our personal data. I put it down to corporate malfeasance. Public companies are not usually paragons of ethical conduct, especially when ethics and profitability are diametrically opposed – as they are with Facebook. In order for Facebook to maintain profitability with its current revenue model, it has to do things with our private data we’d rather not know about.

But even given the moral vacuum that can be found in most corporate boardrooms, Facebook’s brand of hubris in the face of increasingly disturbing revelations seems off-note – out of kilter with the normal damage control playbook. Vaidhyanathan’s analysis brings that cognitive dissonance into focus. And it’s a picture that is disturbing on many levels.

Siva Vaidhyanathan

According to Vaidhyanathan, “Zuckerberg has two core principles from which he has never wavered. They are the founding tenets of Facebook. First, the more people use Facebook for more reasons for more time of the day the better those people will be. …  Zuckerberg truly believes that Facebook benefits humanity and we should use it more, not less. What’s good for Facebook is good for the world and vice-versa.

Second, Zuckerberg deeply believes that the records of our interests, opinions, desires, and interactions with others should be shared as widely as possible so that companies like Facebook can make our lives better for us – even without our knowledge or permission.”

Mark Zuckerberg is not the first tech company founder to have a seemingly ruthless god complex and a “bigger than any one of us” mission. Steve Jobs, Bill Gates, Larry Page, Larry Ellison; I could go on. What is different this time is that Zuckerberg’s chosen revenue model runs completely counter to the idea of personal privacy. Yes, Google makes money from advertising, but the vast majority of that is delivered in response to a very intentional and conscious request on the part of the user. Facebook’s gaping vulnerability is that it can only be profitable by doing things of which we’re unaware. As Vaidhyanathan says, “violating our privacy is in Facebook’s DNA.”

Which all leads to the question, “Are we okay with that?” I’ve been thinking about that myself. Obviously, I’m not okay with it. I just spent 720 words telling you so. But will I strip my profile from the platform?

I’m not sure. Give me a week to think about it.

Avoiding the Truth: Dodging Reality through Social Media

“It’s very hard to imagine that the revival of Flat Earth theories could have happened without the internet.”

Keith Kahn-Harris – Sociologist and author of “Denial: The Unspeakable Truth” – in an interview on CBC Radio

On November 9, 2017, 400 people got together in Raleigh, North Carolina. They all believe the earth is flat. This November 15th and 16th, they will do it again in Denver, Colorado. If you are so inclined, you could even join other flat earthers for a cruise in 2019. The Flat Earth Society is a real thing. They have their own website. And – of course – they have their own Facebook page (actually, there seem to be a few pages; apparently, there are Flat Earth factions).

Perhaps the most troubling thing is this: it isn’t a joke. These people really believe the earth is flat.

How can this happen in 2018? For the answer, we have to look inwards – and backwards – to discover a troubling fact about ourselves. We’re predisposed to believe stuff that isn’t true. And, as Mr. Kahn-Harris points out, this can become dangerous when we add an obsessively large dose of time spent online, particularly with social media.

It makes sense that there was an evolutionary advantage for a group of people who lived in the same area and dealt with the same environmental challenges to have the same basic understanding about things. These commonly held beliefs allowed group learning to be passed down to the individual: eating those red berries would make you sick, wandering alone in the savannah was not a good idea, coveting thy neighbor’s wife might get you stabbed in the middle of the night. Our beliefs often saved our ass.

Because of this, it was in our interest to protect our beliefs. They formed part of our “fast” reasoning loop, not requiring our brain to kick in and do any processing. Cognitive scientists refer to this as “fluency.” Our brains have evolved to be lazy. If they don’t have to work, they don’t. And in the adaptive environment we evolved in – for reasons already stated – this cognitive shortcut generally worked to our benefit. Ask anyone who has had to surrender a long-held belief. It’s tough to do. Overturning a belief requires a lot of cognitive horsepower. It’s far easier to protect it with a scaffolding of supporting “facts” – no matter how shaky that scaffolding may be.

Enter the Internet. And the usual suspect? Social media.

As I said last week, the truth is often hard to handle – especially if it runs headlong into our beliefs. I don’t want to believe in climate change because the consequences of that truth are mind-numbingly frightening. But I find I’m forced to. I also don’t believe the earth is flat. For me, in both cases, the evidence is undeniable. That’s me, however. There are plenty of people who don’t believe climate change is real and – according to the Facebook Official Flat Earth Discussion group – there are at least 107,372 people who believe the earth is flat. The same evidence is also available to them. Why are we different?

When it comes to our belief structure, we all have different mindsets, plotted on a spectrum of credulity. I’m what you may call a scientific skeptic. I tend not to believe something is true unless I see empirical evidence supporting it. There are others who tend to believe in things at a much lower threshold. And this tendency is often found across multiple domains. The mindset that embraces creationism, for example, has been shown to also embrace conspiracy theories.

In the pre-digital world, our beliefs were a feature, not a bug. When we shared a physical space with others, we also relied on a shared “mind-space” that served us well. Common beliefs created a more cohesive social herd and were typically proven out over time against the reality of our environment. Beneficial beliefs were passed along and would become more popular, while non-beneficial beliefs were culled from the pack. It was the cognitive equivalent of Adam Smith’s “Invisible hand.” We created a belief marketplace.

Beliefs are moderated socially. The more unpopular our own personal beliefs, the more pressure there is to abandon them. There is a tipping-point mechanism at work here. Again, in a physically defined social group, those whose mindsets tend to look for objective proof will be the first to abandon a belief that is obviously untrue. From that point forward, social contagion can be a more effective factor in helping the new perspective spread through a population than the actual evidence. “What is true?” is not as important as “What does my neighbor believe to be true?”
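For readers who like to see the mechanics, this tipping-point dynamic can be sketched with a few lines of code. This is a minimal illustration in the spirit of Granovetter’s classic threshold model of collective behavior – the function name and the numbers are mine, purely for demonstration: each person abandons an old belief once the share of the group that has already abandoned it exceeds their personal threshold, so a few low-threshold skeptics can trigger a full cascade, while a gap in the threshold distribution stalls it.

```python
def spread_belief(thresholds, initial_adopters):
    """Simulate a belief cascade. Each person i flips to the new view once
    the fraction of the group that has already flipped reaches thresholds[i].
    Skeptics (low thresholds) flip first; returns the final adopting share."""
    n = len(thresholds)
    adopted = set(initial_adopters)
    while True:
        share = len(adopted) / n
        newly = {i for i in range(n)
                 if i not in adopted and thresholds[i] <= share}
        if not newly:          # nobody else is persuadable: cascade stops
            break
        adopted |= newly
    return len(adopted) / n

# Thresholds evenly spread (0.00, 0.01, ... 0.99): seeding one skeptic
# tips the next person, who tips the next, until everyone converts.
even = [i / 100 for i in range(100)]
print(spread_belief(even, initial_adopters=[0]))   # → 1.0

# Same seed, but everyone else needs 50% buy-in first: the cascade stalls.
stubborn = [0.0] + [0.5] * 99
print(spread_belief(stubborn, initial_adopters=[0]))   # → 0.01
```

The point of the toy model is the one made above: the outcome depends far less on the evidence than on the distribution of neighbors’ thresholds, which is exactly the lever a curated feed can move.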

This is where social media comes in. On Facebook, a community is defined in the mind, not in any particular physical space. Proximity becomes irrelevant. Online, we can always find others who believe the same things we do. A Flat Earther can find comfort by going on a cruise with hundreds of other Flat Earthers and saying that 107,372 people can’t be wrong. They can even point to “scientific” evidence proving their case. For example, if the earth weren’t flat, a jetliner would have to continually point its nose down to keep from flying off into space (granted, this argument conveniently ignores gravity and all sorts of other physics, but why quibble).

Social media provides a progressive banquet of options for dealing with unpleasant truths. Probably the most benign of these is something I wrote about a few weeks back – slacktivism. At least slacktivists acknowledge the truth. From there, you can progress to a filtering of facts (only acknowledging the truths you can handle), wilful ignorance (purposely avoiding the truth), denialism (rejecting the truth) and full-out fantasizing (manufacturing an alternate set of facts). Examples of all these abound on social media.

In fact, the only thing that seems hard to find on Facebook is the bare, unfiltered, unaltered truth. And that’s probably because we’re not looking for it.

 

Why We No Longer Want to Know What’s True

“Truth isn’t truth” – Rudy Giuliani – August 19, 2018

Even without Giuliani’s bizarre statement, we’re developing a weird relationship with the truth. It’s becoming even more inconvenient. It’s certainly becoming more worrisome. I was chatting the other day with a psychiatrist who counsels seniors. I asked him if he was noticing more general anxiety in that generation – a feeling of helplessness as the world seems to be going to hell in a handbasket. I asked him that because I am less optimistic about the future than I have ever been in my life. I wanted to know if that was unusual. He said it wasn’t – I had plenty of company.

You can pick the truth that is most unsettling. Personally, I lose sleep over climate change, the rise of populist politics and the resurgence of xenophobia. I have to limit the amount of news I consume in any day, because it sends me into a depressive state. I feel helpless. And as much as I’m limiting my intake because of my own mental health, I can’t help thinking that this is a dangerous path I’m heading down.

After doing a little research, I have found that things like PTSD (President Trump Stress Disorder) and TAD (Trump Anxiety Disorder) are real things. They’re recognized by the American Psychological Association. After a ten-year decline, anxiety levels in the US spiked dramatically after November 2016. Clinical psychologist Jennifer Panning, who coined TAD, says “the symptoms include feeling a loss of control and helplessness, and fretting about what’s happening in the country and spending excessive time on social media.”

But it’s not just the current political climate that’s causing anxiety. It’s also the climate itself. Enter “ecoanxiety.” Again, the APA, in a recent paper, nails a remarkably accurate diagnosis of how I’m feeling: “Gradual, long-term changes in climate can also surface a number of different emotions, including fear, anger, feelings of powerlessness, or exhaustion.”

“You can’t handle the truth” – Colonel Nathan R. Jessep (from the movie “A Few Good Men”)

So – when the truth scares the hell out of you – what do you do? We can find a few clues in the quotes above. One is this idea of a loss of control. The other is spending excessive time on social media. My belief is that the latter exacerbates the former.

In a sense, Rudy Giuliani is right. Truth isn’t truth, at least, not on the receiving end. We all interpret truth within the context of our own perceived reality. This in no way condones the manipulation of truth upstream from when it reaches us. We need to trust that our information sources are providing us the closest thing possible to a verifiable and objective view of truth.  But we have to accept the fact that for each of us, truth will ultimately be filtered through our own beliefs and understanding of what is real. Part of our own perceived reality is how in control we feel of the current situation. And this is where we begin to see the creeping levels of anxiety.

In 1954, psychologist Julian Rotter introduced the idea of a “locus of control” – the degree of control we believe we have over our own lives. For some of us, our locus is tipped to the internal side. We believe we are firmly at the wheel of our own lives. Others have an external locus, believing that life is left to forces beyond our control. But like most concepts in psychology, the locus of control is not a matter of black and white. It is a spectrum of varying shades of gray. And anxiety can arise when our view of reality seems to be beyond our own locus of control.

The word locus itself comes from the Latin for “place” or “location.” Typically, our control is exercised over those things that are physically close to us. And up until 150 years ago, that worked well. We had little awareness of things beyond our own little world, so we didn’t need to worry about them. But electronic media changed that. Suddenly, we were aware of wars, pestilence, poverty, famines and natural disasters from around the world. This made us part of Marshall McLuhan’s “Global Village.” The circle of our “locus of awareness” suddenly had to accommodate the entire world, but our “locus of control” just couldn’t keep pace.

Even with this expansion of awareness, one could still say that truth remained relatively true. There was an editorial check-and-balance process that verified the veracity of the information we were presented. It certainly wasn’t perfect, but we could place some confidence in the truth of what we read, saw and heard.

And then came social media. Social media creates a nasty feedback loop when it comes to the truth. Once again, Dr. Panning typified these new anxieties as “fretting about what’s happening in the country and spending excessive time on social media.” The algorithmic targeting of social media platforms means that you’re getting a filtered version of the truth. Facebook knows exactly what you’re most anxious about and feeds you a steady diet of content tailored specifically to those anxieties. We have the comfort of seeing posts from members of our network who seem to fear the same things we do and share the same beliefs. But the more time we spend seeking this comfort, the more we’re exposed to those anxiety-inducing triggers, and the further we drift from the truth. It creates a downward spiral that leads to these new types of environmental anxiety we are seeing. And to deal with those anxieties we’re developing new strategies for handling the truth – or, at least, our version of the truth. That’s where I’ll pick up next week.

 

Rethinking Media

I was going to write about the Facebook/Google duopoly, but I got sidetracked by this question: “If Google and Facebook are a duopoly, what is the market they are controlling?” The market, in this case, is online marketing, of which they carve out a whopping 61% of all revenue. That’s advertising revenue. And yet, we have Mark Zuckerberg testifying this spring in front of Congress that Facebook is not a media company…

“I consider us to be a technology company because the primary thing that we do is have engineers who write code and build product and services for other people”

That may be an interesting position to take, but his adoption of a media-based revenue model doesn’t pass the smell test. Facebook makes revenue from advertising, and you can only sell advertising if you are a medium. “Media” literally means an intervening agency that provides a means of communication. The trade-off for providing that means is that you get to monetize it by allowing advertisers to piggyback on that communication flow. There is nothing in the definition of “media” about content creation.

Google has also used this defense. The common thread seems to be that they are exempt from the legal checks and balances normally associated with media because they don’t produce content. But they do accept content, they do have an audience and they do profit from connecting these two through advertising. It is disingenuous to try to split legal hairs in order to avoid the responsibility that comes from their position as a mediator.

But this all brings up the question:  what is “media”? We use the term a lot. It’s in the masthead of this website. It’s on the title slug of this daily column. We have extended our working definition of media, which was formed in an entirely different world, as a guide to what it might be in the future. It’s not working. We should stop.

First of all, definitions depend on stability, and the worlds of media and advertising are definitely not stable. We are in the middle of a massive upheaval. Secondly, definitions are mental labels. Labels are shortcuts we use so we don’t have to think about what something really is. And I’m arguing that we should be thinking long and hard about what media is now and what it might become in the future.

I can accept that technology companies want to disintermediate, democratize and eliminate transactional friction. That’s what technology companies do. They embrace elegance –  in the scientific sense – as the simplest possible solution to something. What Facebook and Google have done is simplified the concept of media back to its original definition: the plural of medium, which is something that sits in the middle. In fact – by this definition – Google and Facebook are truer media than CNN, the New York Times or Breitbart. They sit in between content creators and content consumers. They have disintermediated the distribution of content. They are trying to reap all the benefits of a stripped down and more profitable working model of media while trying to downplay the responsibility that comes with the position they now hold. In Facebook’s case, this is particularly worrisome because they are also aggregating and distributing that content in a way that leads to false assumptions and dangerous network effects.

Media as we used to know it gradually evolved a check and balance process of editorial oversight and journalistic integrity that sat between the content they created and the audience that would consume it. Facebook and Google consider those things transactional friction. They were part of an inelegant system. These “technology companies” did their best to eliminate those human dependent checks and balances while retaining the revenue models that used to subsidize them.

We are still going to need media in a technological future. Whether they be platforms or publishers, we are going to depend on and trust certain destinations for our information. We will become their audience and in exchange they will have the opportunity to monetize this audience. All this should not come cheaply. If they are to be our chosen mediators, they have to live up to their end of the bargain.

 

Drifting Alone on the Social Network

This was not your ordinary Facebook post (if there is such a thing).

For one thing, it was long. Almost 1,600 words long. That’s longer than this column. Secondly, it was raw. It was written by somebody in deep pain who laid their soul bare for their entire network to see. I barely knew this person, and I was given a look into the deepest and darkest part of their lives. The post told the story of the break-up of a marriage and a struggle with depression. It was a disturbing blow-by-blow chronicle of someone hitting the bottom.

A strange thing happened while I was reading the post. At one level, I responded as I hope any decent human would. I felt the pain of this person – even though we were barely acquaintances – and wanted to help in some way. But – in a sort of meta-awareness – I monitored myself as a sample of one to see what the longer-term impact was. This plea through social media seemed extraordinary in a number of ways. What were the possible unintended consequences of this online confessional?

I should add an additional – traumatic – context to this story. This post was catalyzed by the recent suicide of a well-known member of the industry I used to work in. Again, I was made aware of the tragedy through several posts on Facebook. And again, I barely knew the person involved, but somewhere along the line we had connected through Facebook. In the last two days of his life, he had updated his status. He was young. He had a family. He should have had everything to live for. But then again, I really didn’t know him or his circumstances. I certainly didn’t know his pain. Judging by the shock I saw in the comments on Facebook, I don’t think any of us knew.

And that’s what prompted the post I’m writing about. Obviously, this person wanted us to know his pain. He was asking for help. But he was also offering it to anyone who needed it. And he chose to do it through Facebook. This should be social media at its finest…a moving example of people connecting when it counts most. The post certainly touched those who read it. Eighty comments – all supportive – followed the post. Many contained their own abbreviated confessions of going through similar pain. It seemed cathartic. I would even call it inspirational.

So why was I so troubled by this? Something seemed wrong.

Social Networks are Built on Weak Ties

Perhaps the problem is in the nature of our online social networks. In the 1990s, British anthropologist Robin Dunbar suggested our brains have a cognitive limit on the number of stable social relationships we can maintain. That number is 150, which has since become known as Dunbar’s Number.

In follow-up research released in the last few years, Dunbar has found that within this circle of 150 acquaintances, there are smaller circles of increasingly intimate friends. The next layer in is what we would probably call “friends” – people we choose to spend time with. That’s about 50 people. Then we have “close friends” – people we tend to socialize with more frequently. On average, we have 15 of these. And finally, we have our closest friends – those we are intimately connected to. Dunbar puts our cognitive limit at 5 for these most precious connections.

I have about 450 “Friends” on Facebook. If Dunbar’s Number is correct, this is three times the number of social connections I can mentally coordinate. By necessity, almost all of them are what Mark Granovetter would refer to as “weak ties” – social connections that are not actively maintained. And my network is relatively small. Others in the online industry typically have social networks numbering well over a thousand connections. Yet, with all these thousands of connections, did this person not have one of those very close friends he could reach out to in person? Perhaps he did, but the personal investment might have been too high.

The Psychology of the Online Confessional

We all need to be heard. And sometimes, it seems easier to confide in a stranger than a friend. We can talk without worrying about all the baggage we are carrying. Our closest friends know all about that baggage. The personal costs are much higher when we choose to go to a friend. I think – subconsciously – we sometimes tend to gravitate towards “weak ties” when things are at their worst. It’s the reason that psychotherapists and confessional booths exist.

Also, a confession is easier when it’s physically detached from the feedback. We can craft the language before we post. We are not sitting across from someone who might judge us. We are posting alone, and this can bring its own sense of comfort. But, unfortunately, that comfort can be short lived.

The Half Life of Online Empathy

Eventually, the empathy dies away and the social shaming begins. I wish this wasn’t the case – I wish humans were better than this – but we’re not. We’re just human.

If you’re not an absolute sociopath, you can’t help but be empathetic when someone lays their grieving soul bare for you. And the investment required to post a supportive comment is minimal. It is determined by the same cognitive algorithm I talked about last week regarding “slacktivism.” It’s a few seconds of our life and a handful of carefully selected words. At the time, we are probably sincere in our offer of help, but then we move on. This is a weak tie – a person we hardly know. We have no skin in the game.

If that seems callous and cruel on my part, there are previous examples to point to. Over and over again, we pour out our support when the pain is fresh, only to move on to the next thing more and more quickly. This is true when the tragedies are global in nature. I suspect the same is true when they’re more localized, involving people we are passingly acquainted with. And these people have now gone public with their pain. It is now part of their digital footprint. Today, we may feel nothing but empathy. But how will we feel six weeks hence? Or six months? I would like to think we would remain noble, kind and gracious in our thoughts, but most of the evidence points to the contrary.

I didn’t want to be negative in the writing of this. I sincerely hope that such online pleas for help bring aid and comfort to the person in question. As I said, this was all sparked by someone who never got the help he needed at the right time. Perhaps a weak tie online is better than no tie at all.

But I will remain a strong believer in the power of a true person-to-person connection – with all its messiness and organic imperfection. We need more of that. And the more time we spend alone keying in our thoughts in front of the light blue glow of a monitor, the less likely that is to happen.

 

The Pros and Cons of Slacktivism

Lately, I’ve grown to hate my Facebook feed. But I’m also morbidly fascinated by it. It fuels the fires of my discontent with a steady stream of posts about bone-headedness and sheer WTF behavior.

As it turns out, I’m not alone. Many of us are morally outraged by our social media feeds. But does all that righteous indignation lead to anything?

Last week, MediaPost reran a column talking about how good people can turn bad online by following the path of moral outrage to mob-based violence. Today I ask, is there a silver lining to this behavior? Can the digital tipping point become a force for good, pushing us to take action to right wrongs?

The Ever-Touchier Triggers of Moral Outrage

As I’ve written before, normal things don’t go viral. The more outrageous and morally reprehensible something is, the greater likelihood there is that it will be shared on social media. So the triggering forces of moral outrage are becoming more common and more exaggerated. A study found that in our typical lives, only about 5% of the things we experience are immoral in nature.

But our social media feeds are algorithmically loaded to ensure we are constantly ticked off. This isn’t normal. Nor is it healthy.

The Dropping Cost of Being Outraged

So what do we do when outraged? As it turns out, not much — at least, not when we’re on Facebook.

Yale neuroscientist Molly Crockett studies the emerging world of online morality. And she has found that the personal costs associated with expressing moral outrage are dropping as we move our protests online:

“Offline, people can harm wrongdoers’ reputations through gossip, or directly confront them with verbal sanctions or physical aggression. The latter two methods require more effort and also carry potential physical risks for the punisher. In contrast, people can express outrage online with just a few keystrokes, from the comfort of their bedrooms…”

What Crockett is describing is called slacktivism.

You May Be a Slacktivist if…

A slacktivist, according to Urbandictionary.com, is “one who vigorously posts political propaganda and petitions in an effort to affect change in the world without leaving the comfort of the computer screen.”

If your Facebook feed is at all like mine, it’s probably become choked with numerous examples of slacktivism. It seems like the world has become a more moral — albeit heavily biased — place. This should be a good thing, shouldn’t it?

Warning: Outrage Can be Addictive

The problem is that as morality moves online, it loses a lot of the social clout it has historically had to modify behaviors. Crockett explains:

“When outrage expression moves online it becomes more readily available, requires less effort, and is reinforced on a schedule that maximizes the likelihood of future outrage expression in ways that might divorce the feeling of outrage from its behavioral expression…”

In other words, outrage can become addictive. It’s easier to become outraged when it has no consequences for us, when it’s divorced from the normal societal checks and balances that govern our behavior, and when we get a nice little ego boost as others “like” or “share” our indignant rants. The last point is particularly true given the “echo chamber” characteristics of our social-media bubbles. These are all the prerequisites required to foster habitual behavior.

Outrage Locked Inside its own Echo Chamber

Another thing we have to realize about showing our outrage online is that it’s largely a pointless exercise. We are simply preaching to the choir. As Crockett points out:

“Ideological segregation online prevents the targets of outrage from receiving messages that could induce them (and like-minded others) to change their behavior. For politicized issues, moral disapproval ricochets within echo chambers but only occasionally escapes.”

If we are hoping to change anyone’s behavior by publicly shaming them, we have to realize that Facebook’s algorithms make this highly unlikely.

Still, the question remains: Does all this online indignation serve a useful purpose? Does it push us to action?

The answer seems to depend on two factors, each imposing its own threshold on our likelihood to act. The first is whether we’re truly outraged or not. Because showing outrage online is so easy, with few consequences and the potential social reward of a post going viral, it has all the earmarks of a habit-forming behavior. Are we posting because we’re truly mad, or just bored?

“Just as a habitual snacker eats without feeling hungry, a habitual online shamer might express outrage without actually feeling outraged,” writes Crockett.

Moving from online outrage to physical action – whether it’s changing our own behavior or acting to influence a change in someone else – requires a much bigger personal investment on almost every level. This brings us to the second threshold factor: our own personal experiences and situation. Millions of women upped the ante by actively supporting #MeToo because it was intensely personal for them. It’s one example of an online movement that became one of the most potent political forces in recent memory.

One thing does appear to be true. When it comes to social protest, there is definitely more noise out there. We just need a reliable way to convert that to action.

Why Do Good People Become Bad Online?

Here are some questions I have:

  • When do crowds turn ugly?
  • Why do people become Trolls online?
  • When do opinions harden into morals, and what is the difference between the two?

Whether we like it or not, online connection engenders some decidedly bad behavior. It’s one of those unintended consequences that I like to talk about – a behavioral side effect that’s catalyzed by technology.  And, if this is the case, we should know a little more about the psychology behind this behavior.

Modified Mob Behavior

So, when does a group become a mob? And when does a mob turn ugly? There are some aspects of herd mentality that seem to be particularly conducive to online connections. A group turns into a mob when their behaviors become synced to a common purpose. A recent study from the University of Southern California found two predictive signals in social media behavior that indicate when a group protest may become a violent mob.

Tipping Over the Threshold from an Opinion to a Moral

One of the things they found is that when we go from talking about our opinions to preaching morality, things can take a nasty turn. Let’s imagine a spectrum running from loosely held opinions on the left end – things you’re not that emotionally invested in – through beliefs, and on to morals at the right end. This progression also correlates with different ways the brain processes the respective thoughts. At the least intense left end of the spectrum – opinions – we can process them with relatively detached rationality. But as we move to the right, different parts of the brain start kicking in and begin to raise the emotional stakes. When we believe we’re talking about morals, we suddenly have strongly held convictions about what is right and what is wrong. Morals are defined as being “concerned with the principles of right and wrong behavior and the goodness or badness of human character.”

This triggers our ancient and universal feelings about fairness, harm, betrayal, subversion and degradation – the planks of moral foundations theory. The researchers in the USC study found that people are more likely to endorse violence when they moralize an issue. When there are clearly held beliefs about right and wrong, violence starts to seem acceptable.

Violence Needs Company

This moralizing signal is not necessarily tied to being online. But the second predictive signal is. The researchers also found that if people believe others share their views, they are more likely to tip over the threshold from peaceful protest to violence. This is Mark Granovetter’s crowd threshold effect that I’ve talked about before. In social media, this effect is amplified by content filtering and the structure of your network. Like-minded people naturally link to each other, and their posts make for remarkably efficient indicators of their beliefs. It’s very easy in a social network to feel that everybody you know feels the same way that you do. The degree of violent language can escalate quickly through online posts until the entire group is pushed over the threshold into a mode of behavior that would be unthinkable for a disconnected individual.
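Granovetter’s threshold effect is easy to see in a toy simulation. In the sketch below (my own illustration, not the USC study’s model), each person has a personal threshold – the fraction of the crowd that must already be acting before they join in – and a single missing low-threshold individual can stop a full cascade dead:

```python
def cascade(thresholds):
    """Granovetter's threshold model: a person joins once the
    fraction of people already acting meets their personal threshold."""
    n = len(thresholds)
    acting = [t == 0 for t in thresholds]  # zero-threshold instigators start
    changed = True
    while changed:
        changed = False
        frac = sum(acting) / n
        for i, t in enumerate(thresholds):
            if not acting[i] and frac >= t:
                acting[i] = True
                changed = True
    return sum(acting)

# Thresholds 0.00, 0.01, ... 0.99: each new actor tips the next one over.
print(cascade([i / 100 for i in range(100)]))              # → 100 (full riot)
# Remove just the 0.01-threshold person and the cascade never starts.
print(cascade([0.0] + [i / 100 for i in range(2, 101)]))   # → 1
```

The two calls differ by one person: in the first, a uniform spread of thresholds lets each new actor tip the next over; in the second, the single 0.01-threshold person is missing, so the lone instigator never gains company.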

Trolls, Trolls Everywhere

Another study, this time from Stanford, shows that any of us can become a troll. We would like to think that trolls are just a small group of particularly horrible people. But this research indicates that trollism is more situational than previously thought. In other words, if we’re in a bad mood, we’re more likely to become a troll.

But it’s not just our mood. Here again, Granovetter’s threshold model plays a part. Negative comments beget more negative comments, starting a downward spiral of venom. The researchers ran a behavioral experiment in which participants completed either an easy or a difficult task and then read an online article followed by either three neutral comments or three negative, troll-like comments. The results were eye-opening. In the group that did the easy task and read the neutral comments, about 35% posted a negative comment of their own. Knowing that one in three of us seems to have a low threshold for becoming a troll is not exactly encouraging, but it gets worse. If participants either did the difficult task or read the negative comments, the likelihood of posting a troll-like comment jumped to 50%. And if they got both the difficult task and the negative comments, the number climbed to 68%! In the three-part study, another factor that predicted trolling was the time a post was made. Late Sunday and Monday nights are the worst times of the week for negative posts, and Twitter bullying peaks between 5 pm and 8 pm on Sunday. While we’re on the subject, Donald Trump tends to tweet early in the morning, and his most inflammatory tweets come on Saturdays.

But when it comes to trolling, there’s something else at play as well. Yet another study, this one from Federation University in Australia, found that our own brand of empathy can also predict whether we’re going to become a troll. Empathy comes in two flavors: cognitive and affective. Cognitive empathy means you can understand other people’s emotions – you know what will make them happy or mad. Affective empathy means you can internalize and experience the emotions of another – if they’re happy, you’re happy; if they’re mad, you’re mad. Not surprisingly, trolls tend to have high cognitive empathy but low affective empathy. Obviously, there were plenty of such people before the Internet, but they’ve now gained the perfect forum for their twisted form of empathy. They can incite negativity relatively free from social consequence and reprisal. Even if their comments are not anonymous, the poster can hide behind a degree of detachment that would be impossible in a physical environment.

So, why should we care? Again, it comes back to this idea of the unintended social consequences of technology. Increasingly, our connections are digital in nature. And for reasons already stated, I worry that these types of connections may bring out the worst in us.

Sorry, I Don’t Speak Complexity

I was reading about an interesting study from Cornell this week. Dr. Morten Christiansen, Co-Director of Cornell’s Cognitive Science Program, and his colleagues explored an interesting linguistic paradox: languages that a lot of people speak – like English and Mandarin – have large vocabularies but relatively simple grammar, while smaller, more localized languages have fewer words but more complex grammatical rules.

The reason, Christiansen found, has to do with the ease of learning. It doesn’t take much to learn a new word. A couple of exposures and you’ve assimilated it. Because of this, new words become memes that tend to propagate quickly through the population. But the foundations of grammar are much more difficult to understand and learn. It takes repeated exposures and an application of effort to learn them.

Language is a shared cultural component that depends on the structure of a network, so investigating the spread of language gives us an inside view of network dynamics. Take syntactic rules, for example – the rules that govern sentence structure, word order and punctuation. In terms of learnability, syntax is far more complex than the simple definition of a word. To learn syntax, you need repeated exposures to it. And this is where the structure and scope of a network come in. As Dr. Christiansen explains,

“If you have to have multiple exposures to, say, a complex syntactic rule, in smaller communities it’s easier for it to spread and be maintained in the population.”

This research seems to indicate that cultural complexity is first spawned in heavily interlinked and relatively intimate network nodes. For these memes – whether they be language, art, philosophies or ideologies – to bridge to and spread through the greater network, they are often simplified so they’re easier to assimilate.
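A toy diffusion model makes this intuition concrete. In the sketch below (my own illustration, not the Cornell team’s model; every parameter is arbitrary), a “word” is adopted after a single exposure while a “complex rule” needs several exposures from distinct neighbors – and on the same sparse network, the word saturates while the rule stalls at its seed nodes:

```python
import random

def spread(n, degree, exposures_needed, seeds=5, steps=50, rng_seed=1):
    """Toy diffusion on a random network: a node adopts an item once
    it has 'heard' it from `exposures_needed` adopted neighbors."""
    rng = random.Random(rng_seed)
    # Each node listens to `degree` randomly chosen other nodes.
    neighbors = [rng.sample([j for j in range(n) if j != i], degree)
                 for i in range(n)]
    adopted = set(rng.sample(range(n), seeds))
    for _ in range(steps):
        new = {i for i in range(n) if i not in adopted
               and sum(j in adopted for j in neighbors[i]) >= exposures_needed}
        if not new:
            break
        adopted |= new
    return len(adopted) / n  # fraction of the population that adopted

# Same sparse network: a one-exposure "word" reaches nearly everyone,
# while a "complex rule" needing three exposures never escapes its seeds.
print(spread(500, 4, exposures_needed=1))
print(spread(500, 4, exposures_needed=3))
```

Only in a denser network (a higher `degree`, the small tight-knit community of the paradox) does the three-exposure rule stand a chance of taking hold.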

If this is true, then we have to consider what might happen as our world becomes more interconnected. Will there be a collective “dumbing down” of culture? If current events are any indication, that certainly seems to be the case. The memes with the highest potential to spread are absurdly simple. No effort on the part of the receiver is required to understand them.

But there is a counterpoint to this that does hold out some hope. As Christiansen reminds us, “People can self-organize into smaller communities to counteract that drive toward simplification.” From this emerges an interesting yin and yang of cultural content creation. You have highly connected nodes, independent of geography, that are producing some truly complex content. But, because of the high threshold of assimilation required, that complexity becomes trapped in the node. The only things that escape are fragments of the content simple enough to go viral through the greater network. And to do so, they have to be stripped of their context.

This is exactly what caused the language paradox that the team explored. If you have a wide network – or a large population of speakers – there are a greater number of nodes producing new content. In this instance, the words are the fragments, which can be assimilated, and the grammar is the context that gets left behind.

There is another aspect of this to consider. Because of these dynamics unique to a large and highly connected network, the simple and trivial naturally rises to the top. Complexity gets trapped beneath the surface, imprisoned in isolated nodes within the network. But this doesn’t mean complexity goes away – it just fragments and becomes more specific to the node in which it originated. The network loses a common understanding and definition of that complexity. We lose our shared ideological touchstones, which are by necessity more complex.

If we speculate on where this might go in the future, it’s not unreasonable to expect to see an increase in tribalism in matters related to any type of complexity – like religion or politics – and a continuing expansion of simple cultural memes.

The only time we may truly come together as a society is to share a video of a cat playing basketball.

The Decentralization of Trust

Forget Bitcoin. It’s a symptom. Forget even Blockchain. It’s big – but it’s technology. That makes it a tool. Which means it’s used at our will. And that will is the real story. Our will is always the real story – why do we build the tools we do? What is revolutionary is that we’ve finally found a way to decentralize trust. That runs against the very nature of how we’ve defined trust for centuries.

And that’s the big deal.

Trust began by being very intimate – ruled by our instincts in a face-to-face context. But for the last thousand years, our history has been all about concentration and the mass of everything – including whom we trust. We have consolidated our defense, our government, our commerce and our culture. In doing so, we have also consolidated our trust in a few all-powerful institutions.

But the past 20 years have been all about decentralization and tearing down power structures, as we invent new technologies to let us do that. In that vein, Blockchain is a doozy. It will change everything. But it’s only a big deal because we’re exerting our will to make it a big deal. And the “why” behind that is what I’m focusing on.

Rightly or wrongly, we have now decided we’d rather trust distribution than centralization. There is much evidence to support that view. Concentration of power also means concentration of risk. The opportunity for corruption skyrockets. Big things tend to rot from the inside out. This is not a new discovery on our part. We’ve known for at least a few centuries that “absolute power corrupts absolutely.”

As the world consolidated, it also became more corrupt. But it was always a trade-off we felt we had to make. Again, the collective will of the people is the story thread to follow here. Consolidation brought many benefits. We wouldn’t be where we are today if it wasn’t for hierarchies, in one form or another. So we willingly subjugated ourselves to someone – somewhere – hoping to maintain a delicate balance where the risk of corruption was outweighed by personal gain. I remember asking The Atlantic’s noted correspondent, James Fallows, a question when I met him once in China. I asked how the average Chinese citizen could tolerate the paradoxical mix of rampant economic entrepreneurialism and crushing ideological totalitarianism. His answer was, “As long as their lives are better today than they were yesterday, and promise to be even better tomorrow, they’ll tolerate it.”

That pretty much summarizes our attitudes towards control. We tolerated it because if we wanted our lives to continue to improve, we really didn’t have a choice. But perhaps we do now. And that possibility has pushed our collective will away from consolidated power hubs and towards decentralized networks. Blockchain gives us another way to do that. It promises a way to work around Big Money, Big Banks, Big Government and Big Business. We are eager to do so. Why? Because up to now we have had to place our trust in these centralized institutions and that trust has been consistently abused. But perhaps Blockchain technology has found a way to distribute trust in a foolproof way. It appears to offer a way to make everything better without the historic tradeoff of subjugating ourselves to anyone.
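The core trick behind that distributed trust can be sketched in a few lines. The toy hash chain below (a bare-bones illustration, not Bitcoin’s actual protocol – there is no mining, no consensus, no network) shows the one property everything else is built on: each block commits to the hash of the block before it, so rewriting any past record is immediately detectable by anyone holding a copy:

```python
import hashlib
import json

def block_hash(data, prev_hash):
    """The hash commits to the block's data AND its predecessor's hash."""
    payload = json.dumps({"data": data, "prev": prev_hash}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def make_block(data, prev_hash):
    return {"data": data, "prev": prev_hash,
            "hash": block_hash(data, prev_hash)}

def valid(chain):
    """Anyone with a copy can verify the entire history independently."""
    for i, b in enumerate(chain):
        if b["hash"] != block_hash(b["data"], b["prev"]):
            return False                     # block contents were altered
        if i > 0 and b["prev"] != chain[i - 1]["hash"]:
            return False                     # link to history was broken
    return True

chain = [make_block("genesis", "0")]
chain.append(make_block("Alice pays Bob 5", chain[-1]["hash"]))
chain.append(make_block("Bob pays Carol 2", chain[-1]["hash"]))
print(valid(chain))                          # → True
chain[1]["data"] = "Alice pays Bob 500"      # quietly rewrite history...
print(valid(chain))                          # → False: the tampering shows
```

A real blockchain combines this tamper-evidence with consensus rules so that thousands of independent copies must agree – which is precisely what removes the need for a single trusted ledger-keeper.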

However, when we move our trust to a network, we also make that trust subject to unanticipated network effects. That may be the new trade-off we have to make. Increasingly, our technology is dependent on networks, which – by their nature – are complex adaptive systems. That’s why I keep preaching the same message: we have to understand complexity. We must accept that complexity has interaction effects we could never successfully predict.

It’s an interesting swap to consider – control for complexity. Control has always offered us the faint comfort of an illusion of predictability. We hoped that someone who knew more than we did was manning the controls. This is new territory for us. Will it be better? Who can say? But we seem to be building an irreversible head of steam in that direction.

Which Me am I — And On Which Network?

I got an email from Strava. If you’re not familiar with it, Strava is a social network for cyclists and runners. As the former, I joined Strava about two years ago.

Here is the email I received:

Your Friends Are on Strava

 Add friends to follow their adventures and get inspired by their workouts

 J. Doe, Somewhere, CA

 “Follow”

 (Note: the personal information has been changed because after preaching about privacy for the last two weeks, I do have to practice what I preach)

Here’s the thing: I’m not friends with Mr. Doe. I met him a few times on the speaking circuit when we crossed paths. To be brutally honest, J. Doe was a connection I thought would help me grow my business. He was a higher-profile speaker than I was. He’d written a book that sold way more copies than mine ever did. I was “friending up” in my networking.

The last time we met each other — several years ago now — I quickly extended a Facebook friends invite. At the time, I — and the rest of the world — was using Facebook as a catch-all bucket for all my social connections: friends, family and the people I was unabashedly stalking in order to make more money. And J. Doe accepted my invite. It gave my ego a nice little boost at the time.

So, according to Facebook, we’re friends. But we’re not — not really. And that became clear when I got the Strava invite. It would have been really weird if I connected with him on Strava, following his adventures and being inspired by his workouts. We just don’t have that type of relationship. There was no social basis for me to make that connection.

I have different social spheres in my life. I have the remnants of my past professional life as an online marketer. I have my passion as a cyclist. I have a new emerging sphere as a fledgling tourism operator. I have my family.

I could go on. I can think of only a handful of people who comfortably lie within two or more of my spheres.

But with social sign-ins (which I used for Strava) those spheres are suddenly mashed together. It’s becoming clear that socially, we are complex creatures with many, many sides.

Facebook would love nothing more than to be the sole supporting platform of our entire social grid. But that works at cross purposes with how humans socialize. It’s not a monolithic, one-size-fits-all thing, but a sprawling landscape cluttered with very distinctive nodes that are haphazardly linked together.

The only common denominator is ourselves, in the middle of that mess. And even we can have surprising variability. The me that loves cycling is a very different guy from the me that wanted to grow my business profile.

This modality is creating an expansion of socially connected destinations.

Strava is a good example of this. Arguably, it provides a way to track my rides. But it also aspires to be the leading community of athletes. And that’s where it runs headlong into the problem of social modality.

Social sign-ins seem to be a win-win-win. For the user, it eases the headache of maintaining an ever-expanding list of user names and passwords. Sure, there’s that momentary lurch in the pit of our stomachs when we get that warning that we’re sharing our entire lives with the proprietors of the new site, but that goes away with just one little click.

For the website owner, every new social sign-in user comes complete with rich new data and access to all their contacts. Finally, Facebook gets to sink its talons into us just a little deeper, gathering data from yet one more online outpost.

But like many things that seem beneficial, unintended consequences are part of the package. This is especially true when the third party I’m signing up for is building its own community.

Is the “me” that wants to become part of this new community the “me” that Facebook thinks I am? Will things get weird when these two social spheres are mashed together?

Because Facebook assumes that I am always me and you are always you, whatever the context, some of us are forced to splinter our online social personas by maintaining multiple profiles. We may have a work profile and a social one.

The person Facebook thinks we are may be significantly different from the person LinkedIn thinks we are.  Keeping our social selves separate becomes a juggling act of ever-increasing proportions.

So why does Facebook want me to always be me?  It’s because of us — and by us, I mean marketers. We love the idea of markets that are universal and targeting that is omniscient. It just makes our lives so much easier. Our lives as marketers, I mean.

As people? Well, that’s another story — but right now, I’m a marketer.

See the problem?