Avoiding the Truth: Dodging Reality through Social Media

“It’s very hard to imagine that the revival of Flat Earth theories could have happened without the internet.”

Keith Kahn-Harris – Sociologist and author of “Denial: The Unspeakable Truth” – in an interview on CBC Radio

On November 9, 2017, 400 people got together in Raleigh, North Carolina. They all believe the earth is flat. This November 15th and 16th, they will do it again in Denver, Colorado. If you are so inclined, you could even join other Flat Earthers for a cruise in 2019. The Flat Earth Society is a real thing. They have their own website. And – of course – they have their own Facebook page. (Actually, there seem to be a few pages. Apparently, there are Flat Earth factions.)

Perhaps the most troubling thing is this: it isn’t a joke. These people really believe the earth is flat.

How can this happen in 2018? For the answer, we have to look inwards – and backwards – to discover a troubling fact about ourselves. We’re predisposed to believe stuff that isn’t true. And, as Mr. Kahn-Harris points out, this can become dangerous when we add an obsessively large dose of time spent online, particularly with social media.

It makes sense that there was an evolutionary advantage for a group of people who lived in the same area and dealt with the same environmental challenges to share the same basic understanding of things. These commonly held beliefs allowed the group’s learning to be passed down to the individual: eating those red berries would make you sick, wandering alone on the savannah was not a good idea, coveting thy neighbor’s wife might get you stabbed in the middle of the night. Our beliefs often saved our ass.

Because of this, it was in our interest to protect our beliefs. They formed part of our “fast” reasoning loop, not requiring our brain to kick in and do any processing. Cognitive scientists refer to this as “fluency.” Our brains have evolved to be lazy. If they don’t have to work, they don’t. And in the adaptive environment we evolved in – for the reasons already stated – this cognitive shortcut generally worked to our benefit. Ask anyone who has had to surrender a long-held belief. It’s tough to do. Overturning a belief requires a lot of cognitive horsepower. It’s far easier to protect it with a scaffolding of supporting “facts” – no matter how shaky that scaffolding may be.

Enter the Internet. And the usual suspect? Social media.

As I said last week, the truth is often hard to handle – especially if it runs headlong into our beliefs. I don’t want to believe in climate change because the consequences of that truth are mind-numbingly frightening. But I find I’m forced to. I also don’t believe the earth is flat. For me, in both cases, the evidence is undeniable. That’s me, however. There are plenty of people who don’t believe climate change is real and – according to the Facebook Official Flat Earth Discussion group – there are at least 107,372 people who believe the earth is flat. The same evidence is also available to them. Why are we different?

When it comes to our belief structure, we all have different mindsets, plotted along a spectrum of credulity. I’m what you might call a scientific skeptic. I tend not to believe something is true unless I see empirical evidence supporting it. There are others whose threshold for belief is much lower. And this tendency is often found across multiple domains. The mindset that embraces creationism, for example, has been shown to also embrace conspiracy theories.

In the pre-digital world, our beliefs were a feature, not a bug. When we shared a physical space with others, we also relied on a shared “mind-space” that served us well. Common beliefs created a more cohesive social herd and were typically proven out over time against the reality of our environment. Beneficial beliefs were passed along and would become more popular, while non-beneficial beliefs were culled from the pack. It was the cognitive equivalent of Adam Smith’s “Invisible hand.” We created a belief marketplace.

Beliefs are moderated socially. The more unpopular our own personal beliefs, the more pressure there is to abandon them. There is a tipping-point mechanism at work here. Again, in a physically defined social group, those whose mindsets tend to look for objective proof will be the first to abandon a belief that is obviously untrue. From that point forward, social contagion can be a more effective factor in spreading the new perspective through a population than the actual evidence. “What is true?” is not as important as “What does my neighbor believe to be true?”
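To see that tipping point in miniature, here is a minimal sketch – a Granovetter-style threshold model, with every number invented for illustration rather than drawn from any study. Each person abandons a debunked belief once the share of the community that has already abandoned it reaches their personal threshold; the skeptics flip on the evidence alone, and everyone after that is moved by the crowd.

```python
# Granovetter-style threshold model (illustrative numbers only).
# Person i flips once the share of "abandoners" reaches thresholds[i].

N = 100
thresholds = [i / N for i in range(N)]      # a smooth spectrum of credulity
abandoned = [t == 0.0 for t in thresholds]  # only the pure skeptics start

# A fully mixed community: everyone sees everyone. Sweep until stable.
changed = True
while changed:
    changed = False
    share = sum(abandoned) / N              # what the "neighbors" now believe
    for i in range(N):
        if not abandoned[i] and share >= thresholds[i]:
            abandoned[i] = True
            changed = True

print(f"{sum(abandoned)} of {N} eventually abandoned the belief")
```

With this evenly spread community, a single skeptic sets off a cascade that runs to completion; remove the few lowest thresholds and it never starts at all. The evidence only ever moves the first few – the rest are moved by each other.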

This is where social media comes in. On Facebook, a community is defined in the mind, not in any particular physical space. Proximity becomes irrelevant. Online, we can always find others who believe the same things we do. A Flat Earther can find comfort by going on a cruise with hundreds of other Flat Earthers and saying that 107,372 people can’t be wrong. They can even point to “scientific” evidence proving their case. For example, if the earth weren’t flat, a jetliner would have to continually point its nose down to keep from flying off into space. (Granted, this argument conveniently ignores gravity and all sorts of other physics, but why quibble?)

Social media provides a progressive banquet of options for dealing with unpleasant truths. Probably the most benign of these is something I wrote about a few weeks back: slacktivism. At least slacktivists acknowledge the truth. From there, you can progress to a filtering of facts (only acknowledging the truths you can handle), wilful ignorance (purposely avoiding the truth), denialism (rejecting the truth) and full-out fantasizing (manufacturing an alternate set of facts). Examples of all of these abound on social media.

In fact, the only thing that seems hard to find on Facebook is the bare, unfiltered, unaltered truth. And that’s probably because we’re not looking for it.

 

Why We No Longer Want to Know What’s True

“Truth isn’t truth” – Rudy Giuliani – August 19, 2018

Even without Giuliani’s bizarre statement, we’re developing a weird relationship with the truth. It’s becoming even more inconvenient. It’s certainly becoming more worrisome. I was chatting with a psychiatrist the other day who counsels seniors. I asked him if he was noticing more general anxiety in that generation – a feeling of helplessness with how the world seems to be going to hell in a handbasket. I asked him that because I am less optimistic about the future than I ever have been in my life. I wanted to know if that was unusual. He said it wasn’t – I had plenty of company.

You can pick the truth that is most unsettling. Personally, I lose sleep over climate change, the rise of populist politics and the resurgence of xenophobia. I have to limit the amount of news I consume in any day, because it sends me into a depressive state. I feel helpless. And as much as I’m limiting my intake because of my own mental health, I can’t help thinking that this is a dangerous path I’m heading down.

After doing a little research, I have found that things like PTSD (President Trump Stress Disorder) and TAD (Trump Anxiety Disorder) are real things. They’re recognized by the American Psychological Association. After a ten-year decline, anxiety levels in the US spiked dramatically after November 2016. Clinical psychologist Jennifer Panning, who coined the term TAD, says “the symptoms include feeling a loss of control and helplessness, and fretting about what’s happening in the country and spending excessive time on social media.”

But it’s not just the current political climate that’s causing anxiety. It’s also the climate itself. Enter “ecoanxiety.” Again…the APA in a recent paper nails a remarkably accurate diagnosis of how I’m feeling: “Gradual, long-term changes in climate can also surface a number of different emotions, including fear, anger, feelings of powerlessness, or exhaustion.”

“You can’t handle the truth” – Colonel Nathan R. Jessep (from the movie “A Few Good Men”)

So – when the truth scares the hell out of you – what do you do? We can find a few clues in the quotes above. One is this idea of a loss of control. The other is spending excessive time on social media. My belief is that the latter exacerbates the former.

In a sense, Rudy Giuliani is right. Truth isn’t truth, at least, not on the receiving end. We all interpret truth within the context of our own perceived reality. This in no way condones the manipulation of truth upstream from when it reaches us. We need to trust that our information sources are providing us the closest thing possible to a verifiable and objective view of truth.  But we have to accept the fact that for each of us, truth will ultimately be filtered through our own beliefs and understanding of what is real. Part of our own perceived reality is how in control we feel of the current situation. And this is where we begin to see the creeping levels of anxiety.

In 1954, psychologist Julian Rotter introduced the idea of a “locus of control” – the degree of control we believe we have over our own lives. For some of us, our locus is tipped to the internal side. We believe we are firmly at the wheel of our own lives. Others have an external locus, believing that life is left to forces beyond our control. But like most concepts in psychology, the locus of control is not a matter of black and white. It is a spectrum of varying shades of gray. And anxiety can arise when our view of reality seems to be beyond our own locus of control.

The word locus itself comes from the Latin for “place” or “location.” Typically, our control is exercised over those things that are physically close to us. And up until 150 years ago, that worked well. We had little awareness of things beyond our own little world, so we didn’t need to worry about them. But electronic media changed that. Suddenly, we were aware of wars, pestilence, poverty, famines and natural disasters from around the world. This made us part of Marshall McLuhan’s “Global Village.” The circle of our “locus of awareness” suddenly had to accommodate the entire world, but our “locus of control” just couldn’t keep pace.

Even with this expansion of awareness, one could still say that truth remained relatively true. There was an editorial process of checks and balances that vetted the veracity of the information we were presented with. It certainly wasn’t perfect, but we could place some confidence in the truth of what we read, saw and heard.

And then came social media. Social media creates a nasty feedback loop when it comes to the truth. Once again, Dr. Panning described these new anxieties as “fretting about what’s happening in the country and spending excessive time on social media.” The algorithmic targeting of social media platforms means that you’re getting a filtered version of the truth. Facebook knows exactly what you’re most anxious about and feeds you a steady diet of content tailored specifically to those anxieties. We have the comfort of seeing posts from members of our network who seem to fear the same things we do and share the same beliefs. But the more time we spend seeking this comfort, the more we’re exposed to those anxiety-inducing triggers and the further we drift from the truth. It creates the downward spiral that leads to these new types of environmental anxiety. And to deal with those anxieties, we’re developing new strategies for handling the truth – or, at least, our version of the truth. That’s where I’ll pick up next week.
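For what it’s worth, the shape of that loop is easy to simulate. Below is a minimal sketch – the content categories, click rates and scoring rule are all invented for illustration and bear no relation to any real platform’s ranking system – of a feed that ranks purely on engagement, serving a reader who engages most with what worries them.

```python
# A toy engagement-ranked feed (all parameters invented for illustration).
# The reader clicks anxiety-inducing posts more often; the feed learns from
# clicks, so its mix drifts toward whatever the reader can't look away from.

import random

rng = random.Random(42)
CLICK_RATE = {"anxious": 0.6, "neutral": 0.3}   # hypothetical engagement rates
score = {"anxious": 1.0, "neutral": 1.0}        # the feed starts unbiased

for day in range(1, 31):
    total = score["anxious"] + score["neutral"]
    served = ["anxious" if rng.random() < score["anxious"] / total
              else "neutral" for _ in range(20)]  # 20 items served per day
    for kind in served:
        if rng.random() < CLICK_RATE[kind]:     # the reader engages...
            score[kind] += 1.0                  # ...and the feed learns from it
    if day % 10 == 0:
        share = score["anxious"] / (score["anxious"] + score["neutral"])
        print(f"day {day}: feed is {share:.0%} anxiety-inducing")
```

Even in this crude toy, the spiral only runs one way: the anxious share of the feed ratchets upward day after day, because every click teaches the ranker to serve more of the same.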

 

Rethinking Media

I was going to write about the Facebook/Google duopoly, but I got sidetracked by this question: “If Google and Facebook are a duopoly, what is the market they are controlling?” The market, in this case, is online marketing, of which they carve out a whopping 61% of all revenue. That’s advertising revenue. And yet, we have Mark Zuckerberg testifying this spring in front of Congress that Facebook is not a media company…

“I consider us to be a technology company because the primary thing that we do is have engineers who write code and build product and services for other people”

That may be an interesting position to take, but his adoption of a media-based revenue model doesn’t pass the smell test. Facebook makes its revenue from advertising, and you can only sell advertising if you are a medium. A medium, by definition, is an intervening agency that provides a means of communication. The trade-off for providing that means is that you get to monetize it by allowing advertisers to piggyback on that communication flow. There is nothing in the definition of “media” about content creation.

Google has also used this defense. The common thread seems to be that they are exempt from the legal checks and balances normally associated with media because they don’t produce content. But they do accept content, they do have an audience and they do profit from connecting these two through advertising. It is disingenuous to try to split legal hairs in order to avoid the responsibility that comes from their position as a mediator.

But this all brings up the question:  what is “media”? We use the term a lot. It’s in the masthead of this website. It’s on the title slug of this daily column. We have extended our working definition of media, which was formed in an entirely different world, as a guide to what it might be in the future. It’s not working. We should stop.

First of all, definitions depend on stability, and the worlds of media and advertising are definitely not stable. We are in the middle of a massive upheaval. Secondly, definitions are mental labels. Labels are shortcuts we use so we don’t have to think about what something really is. And I’m arguing that we should be thinking long and hard about what media is now and what it might become in the future.

I can accept that technology companies want to disintermediate, democratize and eliminate transactional friction. That’s what technology companies do. They embrace elegance – in the scientific sense – as the simplest possible solution to something. What Facebook and Google have done is simplify the concept of media back to its original definition: the plural of medium, which is something that sits in the middle. In fact – by this definition – Google and Facebook are truer media than CNN, the New York Times or Breitbart. They sit in between content creators and content consumers. They have disintermediated the distribution of content. They are trying to reap all the benefits of a stripped-down and more profitable working model of media while downplaying the responsibility that comes with the position they now hold. In Facebook’s case, this is particularly worrisome because they are also aggregating and distributing that content in a way that leads to false assumptions and dangerous network effects.

Media as we used to know it gradually evolved a check and balance process of editorial oversight and journalistic integrity that sat between the content they created and the audience that would consume it. Facebook and Google consider those things transactional friction. They were part of an inelegant system. These “technology companies” did their best to eliminate those human dependent checks and balances while retaining the revenue models that used to subsidize them.

We are still going to need media in a technological future. Whether they be platforms or publishers, we are going to depend on and trust certain destinations for our information. We will become their audience and in exchange they will have the opportunity to monetize this audience. All this should not come cheaply. If they are to be our chosen mediators, they have to live up to their end of the bargain.

 

 

The Pros and Cons of Slacktivism

Lately, I’ve grown to hate my Facebook feed. But I’m also morbidly fascinated by it. It fuels the fires of my discontent with a steady stream of posts about bone-headedness and sheer WTF behavior.

As it turns out, I’m not alone. Many of us are morally outraged by our social media feeds. But does all that righteous indignation lead to anything?

Last week, MediaPost reran a column talking about how good people can turn bad online by following the path of moral outrage to mob-based violence. Today I ask, is there a silver lining to this behavior? Can the digital tipping point become a force for good, pushing us to take action to right wrongs?

The Ever-Touchier Triggers of Moral Outrage

As I’ve written before, normal things don’t go viral. The more outrageous and morally reprehensible something is, the greater likelihood there is that it will be shared on social media. So the triggering forces of moral outrage are becoming more common and more exaggerated. A study found that in our typical lives, only about 5% of the things we experience are immoral in nature.

But our social media feeds are algorithmically loaded to ensure we are constantly ticked off. This isn’t normal. Nor is it healthy.

The Dropping Cost of Being Outraged

So what do we do when outraged? As it turns out, not much — at least, not when we’re on Facebook.

Yale neuroscientist Molly Crockett studies the emerging world of online morality. And she found that the personal costs associated with expressing moral outrage are dropping as we move our protests online:

“Offline, people can harm wrongdoers’ reputations through gossip, or directly confront them with verbal sanctions or physical aggression. The latter two methods require more effort and also carry potential physical risks for the punisher. In contrast, people can express outrage online with just a few keystrokes, from the comfort of their bedrooms…”

What Crockett is describing is called slacktivism.

You May Be a Slacktivist if…

A slacktivist, according to Urbandictionary.com, is “one who vigorously posts political propaganda and petitions in an effort to affect change in the world without leaving the comfort of the computer screen.”

If your Facebook feed is at all like mine, it’s probably become choked with numerous examples of slacktivism. It seems like the world has become a more moral — albeit heavily biased — place. This should be a good thing, shouldn’t it?

Warning: Outrage Can be Addictive

The problem is that when morality moves online, it loses a lot of the social clout it has historically had to modify behaviors. Crockett explains:

“When outrage expression moves online it becomes more readily available, requires less effort, and is reinforced on a schedule that maximizes the likelihood of future outrage expression in ways that might divorce the feeling of outrage from its behavioral expression…”

In other words, outrage can become addictive. It’s easier to become outraged when it has no consequences for us, when it’s divorced from the normal societal checks and balances that govern our behavior, and when we get a nice little ego boost every time others “like” or “share” our indignant rants. The last point is particularly true given the “echo chamber” characteristics of our social-media bubbles. These are all the prerequisites required to foster habitual behavior.

Outrage Locked Inside its own Echo Chamber

Another thing we have to realize about showing our outrage online is that it’s largely a pointless exercise. We are simply preaching to the choir. As Crockett points out:

“Ideological segregation online prevents the targets of outrage from receiving messages that could induce them (and like-minded others) to change their behavior. For politicized issues, moral disapproval ricochets within echo chambers but only occasionally escapes.”

If we are hoping to change anyone’s behavior by publicly shaming them, we have to realize that Facebook’s algorithms make this highly unlikely.

Still, the question remains: Does all this online indignation serve a useful purpose? Does it push us to action?

The answer seems to depend on two factors, each imposing its own threshold on our likelihood to act. The first is whether we’re truly outraged or not. Because showing outrage online is so easy, with few consequences and the potential social reward of a post going viral, it has all the earmarks of a habit-forming behavior. Are we posting because we’re truly mad, or just bored?

“Just as a habitual snacker eats without feeling hungry, a habitual online shamer might express outrage without actually feeling outraged,” writes Crockett.

Moving from online outrage to physical action – whether it’s changing our own behavior or acting to influence a change in someone else – requires a much bigger personal investment on almost every level. This brings us to the second threshold factor: our own personal experiences and situation. Millions of women upped the ante by actively supporting #MeToo because it was intensely personal for them. It’s one example of an online movement that became one of the most potent political forces in recent memory.

One thing does appear to be true. When it comes to social protest, there is definitely more noise out there. We just need a reliable way to convert that to action.

Tempest in a Tweet-Pot

On February 16, a Facebook VP of Ads named Rob Goldman had a bad day. That was the day the office of Special Counsel Robert Mueller released an indictment of 13 Russian operatives accused of interfering in the U.S. election. Goldman felt he had to comment via a series of tweets that appeared to question the seriousness with which the Mueller investigation had considered the ads placed by Russians on Facebook. Nothing much happened for the rest of the day. But on February 17, after the US Tweeter-in-Chief – Donald Trump – picked up the thread, Facebook realized the tweets had turned into a “shit sandwich,” and to limit the damage, Goldman had to officially apologize.

It’s just one more example of a personal tweet blowing up into a major news event. This is happening with increasingly irritating frequency. So today, I thought I’d explore why.

Personal Brand vs Corporate Brand

First, why did Rob Goldman feel he had to go public with his views anyway? He did it because he could. We all have varying degrees of loyalty to our employer, and I’m sure the same is true for Mr. Goldman. Otherwise he wouldn’t have eaten crow a few days later with his public mea culpa. But our true loyalty goes not to the brand we work for, but to the brand we are. Goldman – like me, like you, like all of us – is building his personal brand. Anyone who says they’re not – yet posts anything online – is in denial. Goldman’s brand, according to his Twitter account, is “Student, seeker, raconteur, burner. ENFP.” That is followed by the disclaimer “Views are mine.” And you know what? This whole debacle has been great for Goldman’s brand, at least in terms of audience size. Before February 16th, he had about 1,500 followers. When I checked, that had swelled to almost 12,000. Brand Goldman is on a roll!

The idea of a personal brand is new – just a few decades old. It really became amplified through the use of social media. Suddenly, you could have an audience – and not just any audience, but an audience numbering in the millions.

Before that, the only people who could have been said to have personal brands were artists, authors and musicians. They made their living by sharing who they were with us.

For the rest of us, our brands were trapped in our own contexts. Only the people who knew us were exposed to our brands. But the amplification of social media suddenly exposed our brands to a much broader audience. And when things went viral, as they did on February 17, millions suddenly became aware of Rob Goldman and his tweets without knowing anything more than that he was a VP of Ads for Facebook.

It was that connection that created the second issue for Goldman. When we speak for our own personal brands, we can say “views are mine,” but the problem always comes when things blow up, as they did for Rob Goldman. None of his tweets had been vetted by anyone at Facebook, yet he had suddenly become a spokesperson for the corporation. And for those eager to accept his tweets as fact, they suddenly became the “truth.”

Twitter: “Truth” Without Context

Increasingly, we’re not really that interested in the truth. What we are interested in is our beliefs and our own personal truth. This is the era of “post-truth” – the Oxford Dictionaries word of the year for 2016 – defined as “relating to or denoting circumstances in which objective facts are less influential in shaping public opinion than appeals to emotion and personal belief.”

Truth was once a commonly understood base that could be supported by facts. Now, truth is in the eye of the beholder. Common understandings are increasingly difficult to come to as the world continues to fragment and become more complex. How can we possibly come to a common understanding of what is “true” when any issue worth discussing is complex? This is certainly true of the Mueller investigation. To try to distill the scope of it into 900 words – about the length of this column – would be virtually impossible. To reduce it to 280 characters – the limit of a tweet, and one-twentieth the length of this column – well, there we should not tread. But, of course, we do.

This problem is exacerbated by the medium itself. Twitter is a channel that encourages “quippiness.” When we’re tweeting, we all want to be Oscar Wilde. Writing this column usually takes me three to four hours, including time to do some research, create a rough outline and then do the actual writing. That’s not an especially long time, but the process does allow some room for mental reflection and self-editing. The average tweet takes less than a minute to write – probably less to think about – and then it’s out there, a matter of record, irretrievable. You should find it more than a little terrifying that this is a chosen medium for the President of the United States, and one that is increasingly forming our worldview.

Twitter is also not a medium that provides much support for irony, sarcasm or satire. In the post-truth era, we usually accept tweets as facts, especially when they come from someone in a somewhat official position, as in the case of Rob Goldman. But at best, they’re abbreviated opinions.

In the light of all this, one has to appreciate Mr. Goldman’s Twitter handle: @robjective.

Short Sightedness, Sharks and Mental Myopia

2017 was an average year for shark attacks.

And this just in…

By the year 2050, half of the world will be nearsighted.

What could these two headlines possibly have in common? Well, sit back – I’ll tell you.

First, let’s look at why 2017 was a decidedly non-eventful year – at least when it came to interactions between Selachimorpha (sharks) and Homo (us). Nothing unusual happened. That’s it. There was no sudden spike in Jaws-like incidents. Sharks didn’t suddenly disappear from the world’s oceans. Everything was just – average. Was that the only way 2017 was uneventful? No. There were others. But we didn’t notice, because we were focused on the ways the world seemed to be going to hell in a handbasket. If we plot 2017 as a bell curve, we were focused on the outliers, not the middle.

There’s no shame in that. That’s what we do. The usual doesn’t make the nightly news. It doesn’t even make our Facebook feeds. But here’s the thing: we live most of our lives in the middle of the curve, not in the outlier extremes. The things that are most relevant to our lives fall squarely into the usual. Yet all the communication channels built to deliver information to us are focused on the unusual. And that’s because we insist not on being informed, but on being amused.

In 1985, Neil Postman wrote the book Amusing Ourselves to Death. In it, he charts how the introduction of electronic media – especially television – hastened our decline into a dystopian existence that shares more than a few parallels with Aldous Huxley’s Brave New World. His warning was pointed, to say the least: “There are two ways by which the spirit of a culture may be shrivelled,” Postman says. “In the first—the Orwellian—culture becomes a prison. In the second—the Huxleyan—culture becomes a burlesque.” It’s probably worth reminding ourselves of what burlesque means: “a literary or dramatic work that seeks to ridicule by means of grotesque exaggeration or comic imitation.” If the transformation of our culture into burlesque seemed apparent in the ’80s, you’d pretty much have to say it’s a fait accompli more than 30 years later. Grotesque exaggeration is the new normal – not to mention the new president.

But this steering of our numbed senses towards the extremes has some consequences. As the world becomes more extreme, it requires more extreme events to catch our notice. We are spending more and more of our media consumption time amongst the outliers. And that brings up the second problem.

Extremes – by their nature – tend to be ideologically polarized as well. If we’re going to consume extremes that carry a politically charged message, we stick to the ones that are well synced with our worldview. In cognitive terms, these ideas are “fluent” – they’re easier to process. The more polarized and extreme a message is, the more important it is that it be fluent for us. We are also more likely to filter out non-fluent messages – messages we don’t happen to agree with.

The third problem is that we are becoming short-sighted (see, I told you I’d get there eventually). So not only do we look for extremes, we are increasingly seeking out the trivial. We do so because being informed is increasingly scaring the bejeezus out of us. We don’t look too deep, nor do we look too far into the future – because the future is scary. There is the collapse of our climate, World War III with North Korea, four more years of Trump… this stuff is terrifying. Increasingly, we spend our cognitive resources looking for things that are amusing and immediate. The information we seek has to provide immediate gratification. Yes, we are becoming physically short-sighted because we stare at screens too much, but we’re becoming mentally myopic as well.

If all this is disturbing, don’t worry. Just grab a Soma and enjoy a Feelie.

Sorry, I Don’t Speak Complexity

I was reading about an interesting study from Cornell this week. Dr. Morten Christiansen, co-director of Cornell’s Cognitive Science Program, and his colleagues explored an interesting linguistic paradox: languages that a lot of people speak – like English and Mandarin – have large vocabularies but relatively simple grammar. Languages that are smaller and more localized have fewer words but more complex grammatical rules.

The reason, Christiansen found, has to do with the ease of learning. It doesn’t take much to learn a new word. A couple of exposures and you’ve assimilated it. Because of this, new words become memes that tend to propagate quickly through the population. But the foundations of grammar are much more difficult to understand and learn. It takes repeated exposures and an application of effort to learn them.

Language is a shared cultural component that depends on the structure of a network, and investigating the spread of language gives us an inside view of network dynamics. Take syntactic rules, for example – the rules that govern sentence structure, word order and punctuation. In terms of learnability, syntax offers much more complexity than simply understanding the definition of a word. In order to learn syntax, you need repeated exposures to it. And this is where the structure and scope of a network come in. As Dr. Christiansen explains,

“If you have to have multiple exposures to, say, a complex syntactic rule, in smaller communities it’s easier for it to spread and be maintained in the population.”

This research seems to indicate that cultural complexity is first spawned in heavily interlinked and relatively intimate network nodes. For these memes – whether they be language, art, philosophies or ideologies – to bridge to and spread through the greater network, they are often simplified so they’re easier to assimilate.

If this is true, then we have to consider what might happen as our world becomes more interconnected. Will there be a collective “dumbing down” of culture? If current events are any indication, that certainly seems to be the case. The memes with the highest potential to spread are absurdly simple. No effort on the part of the receiver is required to understand them.

But there is a counterpoint to this that does hold out some hope. As Christiansen reminds us, “People can self-organize into smaller communities to counteract that drive toward simplification.” From this emerges an interesting yin and yang of cultural content creation. You have more highly connected nodes, independent of geography, that are producing some truly complex content. But, because of the high threshold of assimilation required, the complexity becomes trapped in that node. The only things that escape are fragments of that content that can be simplified to the point where they can go viral through the greater network. But to do so, they have to be stripped of their context.

This is exactly what caused the language paradox that the team explored. If you have a wide network – or a large population of speakers – there are a greater number of nodes producing new content. In this instance, the words are the fragments, which can be assimilated, and the grammar is the context that gets left behind.
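As a thought experiment, the asymmetry is easy to simulate. Here is a minimal sketch with invented parameters – one exposure to learn a word, three to master a rule, and random one-on-one conversations standing in for a real social network – of how community size alone can split vocabulary from grammar.

```python
# Toy model of the learnability asymmetry (all parameters illustrative):
# a word is learned after one exposure, a rule only after three exposures
# from fluent speakers. Small communities re-meet the same speakers often;
# large ones rarely do.

import random

def transmission(pop_size, encounters=5000, rule_needed=3, seed=7):
    rng = random.Random(seed)
    knows_word = {0}                   # speaker 0 starts with the word...
    rule_exposures = {0: rule_needed}  # ...and full command of the rule
    for _ in range(encounters):
        a, b = rng.sample(range(pop_size), 2)   # one random conversation
        # Words: a single exposure suffices, so knowledge flows either way.
        if a in knows_word or b in knows_word:
            knows_word.update((a, b))
        # Rules: a listener only accumulates exposure from a fluent speaker.
        for speaker, listener in ((a, b), (b, a)):
            if rule_exposures.get(speaker, 0) >= rule_needed:
                rule_exposures[listener] = rule_exposures.get(listener, 0) + 1
    knows_rule = sum(c >= rule_needed for c in rule_exposures.values())
    return len(knows_word), knows_rule

for size in (10, 1000):
    words, rules = transmission(size)
    print(f"community of {size}: {words} know the word, {rules} know the rule")
```

In the community of ten, everyone keeps re-meeting everyone, so both the word and the rule spread to all. In the community of a thousand, the word still travels from node to node, but the rule never escapes its originator – which is, in miniature, the paradox the Cornell team describes.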

There is another aspect of this to consider. Because of these dynamics unique to a large and highly connected network, the simple and trivial naturally rises to the top. Complexity gets trapped beneath the surface, imprisoned in isolated nodes within the network. But this doesn’t mean complexity goes away – it just fragments and becomes more specific to the node in which it originated. The network loses a common understanding and definition of that complexity. We lose our shared ideological touchstones, which are by necessity more complex.

If we speculate on where this might go in the future, it’s not unreasonable to expect to see an increase in tribalism in matters related to any type of complexity – like religion or politics – and a continuing expansion of simple cultural memes.

The only time we may truly come together as a society is to share a video of a cat playing basketball.