Our Trust Issues with Advertising Based Revenue Models

Facebook’s in the soup again. They’re getting their hands slapped for tracking our location. And I have to ask: why is anyone surprised they’re tracking our location? I’ve said this before, but I’ll say it again. What is good for us is not good for Facebook’s revenue model. And vice versa. Social platforms should never be driven by advertising. Period. Advertising requires targeting. And when you combine prospect targeting and the digital residue of our online activities, bad things are bound to happen. It’s inevitable, and it’s going to get worse. Facebook’s future earnings absolutely dictate that they have to try to get us to spend more time on their platform, and they have to be more invasive about tracking what we do with that time. Their walled data garden and their reluctance to give us a peek at what’s happening inside should be massive red flags.

Our social activities are already starting to fragment across multiple platforms – and multiple accounts within each of those platforms. We are socially complex people, and it’s naïve to think that all that complexity could be contained within any one ecosystem – even one as sprawling as Facebook’s. In our real lives – you know, the life you lead when you’re not staring at your phone – our social activities are as varied as our moods, our activities, our environment and the people we are currently sharing that environment with. Being social is not a single aspect of our lives. It is the connective tissue of all that we are. It binds all the things we do into a tapestry of experience. It reflects who we are, and our identities are shaped by it. Even when we’re alone, as I am while writing this column, we are being social. I am communicating with each of you, and the things I am communicating are shaped by my own social experiences.

My point here is that being social is not something we turn on and off. We don’t go somewhere to be social. We are social. To reduce social complexity and try to contain it within an online ecosystem is a fool’s errand. Trying to support it with advertising just makes it worse. A revenue model based on advertising is self-limiting. It has always been a path of least resistance, which is why it’s so commonly used. It places no financial hurdles on the path to adoption. We have never had to pay money to use Facebook, or Instagram, or Snapchat. But we do pay with our privacy. And eventually, after the inevitable security breaches, we also lose our trust. That lack of trust limits the effectiveness of any social medium.

Of course, it’s not just social media that suffers from the trust issues that come with advertising-based revenue. This advertising-driven path has worked up to now because trust was never really an issue. We took comfort in our perceived anonymity in the eyes of the marketer. We were part of a faceless, nameless mass market that traded attention for access to information and entertainment. Advertising works well with mass. As I mentioned, there are no obstacles to adoption. It was the easiest way to assemble the biggest possible audience. But we now market one to one. And as the ones on the receiving end, we are increasingly seeking functionality. That is a fundamentally different premise. When we seek to do things, rather than passively consume content, we can no longer remain anonymous. We make choices, we go places, we buy stuff, we do things. In doing this, we leave indelible footprints which are easy to track and aggregate.

Our online and offline lives have now melded to the point where we need – and expect – something more than a collection of platforms offering fragmented functionality. What we need is a highly personalized OS, a foundational operating system that is intimately designed just for us and connects the dots of functionality. This is already happening in bits and pieces through the data we surrender when we participate in the online world. But that data lives in thousands of different walled gardens, including the social platforms we use. Then that data is used to target advertising to us. And we hate advertising. It’s a fundamentally flawed contract that we will – given a viable alternative – opt out of. We don’t trust the social platforms we use, and we’re right not to. If we had any idea of the depth and degree of personal information they have about us, we would be aghast. I have said before that we are willing to trade privacy for functionality, and I still believe this. But once our trust has been broken, we are less willing to surrender that private data, which is essential to the continued profitability of an ad-supported platform.

We need to own our own data. This isn’t so much to protect our privacy as it is to build a new trust contract that will allow that data to be used more effectively for our own purposes, and not those of a corporation whose only motive is to increase its own profit. We need to remove the limits imposed by a flawed functionality offering based on serving us ads we don’t want. If we’re looking for the true disruptor in advertising, that’s it in a nutshell.

 

A More Optimistic Side of Technology

In this column, I tend to zero in on different aspects of technology that bring negative consequences to those of us who are all too human. Frankly, it’s exhausting. I’m getting depressed by it. I’m sure you are too. So today, I’m trying something different – optimism.

I’m currently plowing through Steven Pinker’s The Better Angels of Our Nature. Plow is the operative word here. It’s 832 pages long. Pinker exhaustively supports his optimistic view of humankind with reams of empirical data. To save you many, many hours of reading, I’ll give you the gist in seven words: We’re better than we used to be. Yes, right now – today – we humans are kinder, gentler, more moral, more peaceful, more rational and more caring than we have been in our entire history. It’s a trend that can’t be denied. When it comes to our empathetic track record as a species, it’s been overwhelmingly up and to the right.

Pinker cites six driving trends, including the Humanitarian Revolution, the Civilizing Process and the Pacification Process. I realized as I was reading that all of Pinker’s trends have been made possible – directly or indirectly – by the advance of technology. The more advanced our tools become, the more peaceful we become. That seems to bode well for the future. Sure, there are some anomalies. Two world wars come to mind. But if you keep a macro focus – which Pinker does – the trend is undeniable. Our world is less “red in tooth and claw” than it used to be.

Let’s take the most obvious example. On July 16, 1945 we tested the most powerful weapon ever invented. If we wanted to kill people in unfathomable numbers, the nuclear bomb was the way to do it. In the words of Robert Oppenheimer, we had become “Death, the destroyer of worlds.”

In the 73 years since then, this power has been used twice, both by the US in World War II. In that same time, deaths by all types of warfare worldwide have decreased dramatically. The technological ability to inflict death and our actual track record in doing so have gone in two opposite directions. Of course, it’s not just weapons technology that has improved dramatically. Technology has been advancing faster than ever on all fronts. And I think it’s dragged the “better angels of our nature” along with it.

Speaking from my own experience, I’m probably a better person than I was 40 years ago. I’m less prejudiced, more tolerant and more aware of all types of horrible human behavior. And it’s not just me. I still know plenty of bigots, but they’re not as bigoted as they were 40 years ago. I can’t think of anyone who has gone backwards in their social attitudes in that time. People of my parents’ generation used to say things back in the 1950s and ’60s that are unacceptable today. If they do say those things now, they generally get an elbow in the side.

We tend to look on the dark side of things. Humans have been making apocalyptic predictions for as long as we’ve been human. Recently, technology has been increasingly fingered as the cause of our collective demise. But if you look at the evidence over the long term, the opposite has been true. We have been consistently improving our lot and ourselves thanks to technology. Technology not only increases our capabilities and connection but it also appears to increase our compassion. This correlation is not universal. It’s definitely not guaranteed. When we talk about collective behaviors, we have to aggregate and average. And when you do so, our behavior nets out to be gentler and kinder than it ever has been before. I have to believe the advance of technology has a lot to do with it.

 

Why We No Longer Want to Know What’s True

“Truth isn’t truth” – Rudy Giuliani – August 19, 2018

Even without Giuliani’s bizarre statement, we’re developing a weird relationship with the truth. It’s becoming even more inconvenient. It’s certainly becoming more worrisome. I was chatting with a psychiatrist the other day who counsels seniors. I asked him if he was noticing more general anxiety in that generation – a feeling of helplessness with how the world seems to be going to hell in a handbasket. I asked him that because I am less optimistic about the future than I ever have been in my life. I wanted to know if that was unusual. He said it wasn’t – I had plenty of company.

You can pick the truth that is most unsettling. Personally, I lose sleep over climate change, the rise of populist politics and the resurgence of xenophobia. I have to limit the amount of news I consume in any day, because it sends me into a depressive state. I feel helpless. And as much as I’m limiting my intake because of my own mental health, I can’t help thinking that this is a dangerous path I’m heading down.

After doing a little research, I have found that PTSD (President Trump Stress Disorder) and TAD (Trump Anxiety Disorder) are real things. They’re recognized by the American Psychological Association. After a ten-year decline, anxiety levels in the US spiked dramatically after November 2016. Clinical psychologist Jennifer Panning, who coined the term TAD, says “the symptoms include feeling a loss of control and helplessness, and fretting about what’s happening in the country and spending excessive time on social media.”

But it’s not just the current political climate that’s causing anxiety. It’s also the climate itself. Enter “ecoanxiety.” Again, the APA, in a recent paper, nails a remarkably accurate diagnosis of how I’m feeling: “Gradual, long-term changes in climate can also surface a number of different emotions, including fear, anger, feelings of powerlessness, or exhaustion.”

“You can’t handle the truth” – Colonel Nathan R. Jessep (from the movie “A Few Good Men”)

So – when the truth scares the hell out of you – what do you do? We can find a few clues in the quotes above. One is this idea of a loss of control. The other is spending excessive time on social media. My belief is that the latter exacerbates the former.

In a sense, Rudy Giuliani is right. Truth isn’t truth, at least, not on the receiving end. We all interpret truth within the context of our own perceived reality. This in no way condones the manipulation of truth upstream from when it reaches us. We need to trust that our information sources are providing us the closest thing possible to a verifiable and objective view of truth.  But we have to accept the fact that for each of us, truth will ultimately be filtered through our own beliefs and understanding of what is real. Part of our own perceived reality is how in control we feel of the current situation. And this is where we begin to see the creeping levels of anxiety.

In 1954, psychologist Julian Rotter introduced the idea of a “locus of control” – the degree of control we believe we have over our own lives. For some of us, our locus is tipped to the internal side. We believe we are firmly at the wheel of our own lives. Others have an external locus, believing that life is left to forces beyond our control. But like most concepts in psychology, the locus of control is not a matter of black and white. It is a spectrum of varying shades of gray. And anxiety can arise when our view of reality seems to be beyond our own locus of control.

The word locus itself comes from the Latin for “place” or “location.” Typically, our control is exercised over those things that are physically close to us. And up until 150 years ago, that worked well. We had little awareness of things beyond our own little world, so we didn’t need to worry about them. But electronic media changed that. Suddenly, we were aware of wars, pestilence, poverty, famines and natural disasters from around the world. This made us part of Marshall McLuhan’s “Global Village.” The circle of our “locus of awareness” suddenly had to accommodate the entire world, but our “locus of control” just couldn’t keep pace.

Even with this expansion of awareness, one could still say that truth remained relatively true. There was an editorial check and balance process that checked the veracity of the information we were presented. It certainly wasn’t perfect, but we could place some confidence in the truth of what we read, saw and heard.

And then came social media. Social media creates a nasty feedback loop when it comes to the truth. Once again, Dr. Panning typified these new anxieties as “fretting about what’s happening in the country and spending excessive time on social media.” The algorithmic targeting of social media platforms means that you’re getting a filtered version of the truth. Facebook knows exactly what you’re most anxious about and feeds you a steady diet of content tailored specifically to those anxieties. We have the comfort of seeing posts from members of our network who seem to fear the same things we do and share the same beliefs. But the more time we spend seeking this comfort, the more we’re exposed to the anxiety-inducing triggers and the further we drift from the truth. It creates a downward spiral that leads to these new types of environmental anxiety we are seeing. And to deal with those anxieties, we’re developing new strategies for handling the truth – or, at least, our version of the truth. That’s where I’ll pick up next week.

 

Deconstructing the Google/Facebook Duopoly

We’ve all heard about it. The Google/Facebook Duopoly. This was what I was going to write about last week before I got sidetracked. I’m back on track now (or, at least, somewhat back on track). So let’s start by understanding what a duopoly is…

…a situation in which two suppliers dominate the market for a commodity or service.

And this, from Wikipedia…

… In practice, the term is also used where two firms have dominant control over a market.

So, to have a duopoly, you need two things: domination and control. First, let’s deal with the domination question. In 2017, Google and Facebook together took a very healthy 59% slice of all digital ad revenues in the US. Google captured 38.6% of the total, with Facebook capturing 20%. That certainly seems dominant. But if online marketing is the market, that is a very large basket with a lot of different items thrown in. So, let’s do a broad categorization to help deconstruct this a bit. Typically, when I try to understand marketing, I like to start with humans – or, more specifically, with what that lump of grey matter we call a brain is doing. And if we’re talking about marketing, we’re talking about attention – how our brains are engaging with our environment. That is an interesting way to divide up the market, because it neatly bisects the attentional landscape, with Google on one side and Facebook on the other.

Google dominates the top-down, intent-driven, attentionally focused market. If you’re part of this market, you have something in mind and you’re trying to find it. If we use search as a proxy for this attentional state (which is the best proxy I can think of), we see just how dominant Google is. It owns this market to a huge degree. According to Statista, as of April 2018 Google held about 87% of the worldwide search market. The key metric here is success. Google needs to be the best way to fulfill those searches. And if market share is any indication, it is.

Facebook apparently dominates the bottom-up awareness market. These are the people killing time online; they are not actively looking with commercial intent. This is more of an awareness play, where attention has to be diverted to an advertising message. Therefore, time spent becomes the key factor. You need to be in front of the right eyeballs, so you need a lot of eyeballs and a way to target the right ones.

Here is where things get interesting. If we look at share of consumer time, Google dominates here. But there is a huge caveat, which I’ll get to in a second. According to a report this spring by Pivotal Research, Google owns just under 28% of all the time we spend consuming digital content. Facebook has just over a 16% share of this market. So why do we have a duopoly and not a monopoly? It’s because of that caveat – a whopping slice of Google’s “time spent” dominance comes from YouTube. And YouTube has an entirely different attentional profile – one that’s much harder to present advertising against. When you’re watching a video on YouTube, your attention is “locked” on the video. Disrupting that attention erodes the user experience. So Google has had a tough time monetizing YouTube.

According to Seeking Alpha, Google’s search ad business will account for 68% of their total revenue of $77 billion this year. That’s over $52 billion sitting in that “top-down” attentionally focused bucket. YouTube, which is very much in the “bottom-up” disruptive bucket, accounts for $12 billion in advertising revenues. Certainly nothing to sneeze at, but not on the same scale as Google’s search business. Facebook’s revenue, at about $36 billion, is also generated by this same “bottom-up” market, but with a different attentional profile. The Facebook user is not as “locked in” as they are on YouTube. With the right targeting tools, something Facebook has excelled at, you have a decent chance of gaining their attention long enough for them to notice your ad.
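
The split between those two buckets is simple arithmetic, and it can be sanity-checked with a quick sketch. All the figures below come from the column's cited estimates (Seeking Alpha and Pivotal), not official company filings:

```python
# Back-of-envelope check of the revenue buckets described above.
# Figures are the column's cited estimates, not official filings.

google_total = 77.0    # Google's projected total revenue, in $ billions
search_share = 0.68    # portion attributed to search advertising
google_search = google_total * search_share  # ~52.4

youtube_ads = 12.0     # YouTube ad revenue, $B ("bottom-up" bucket)
facebook_ads = 36.0    # Facebook ad revenue, $B ("bottom-up" bucket)

top_down = google_search                # intent-driven attention
bottom_up = youtube_ads + facebook_ads  # disruption-driven attention

print(f"Top-down (search):   ${top_down:.1f}B")
print(f"Bottom-up (display): ${bottom_up:.1f}B")
```

By this rough accounting, the intent-driven bucket alone is still bigger than YouTube and Facebook's disruption-driven buckets combined.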

Control

If we look at the second part of the definition of a duopoly – that of control – we see some potential chinks in the armor of both Google and Facebook. Typically, market control was in the form of physical constraints against the competition. But in this new type of market, the control can only be in the minds of the users. The barriers to competitive entry are all defined in mental terms.

In Google’s case, they have a single line of defense: they have to be an unbreakable habit. Habits are mental scripts that depend on two things – obvious environmental cues that trigger the habitual behavior, and acceptable outcomes once the script completes. So, to maintain the habit, Google has to ensure that whatever environment you might be in when searching online for something, Google is just a click or two away. Additionally, it has to meet a certain threshold of success. Habits are tough to break, but those two requirements are also the two areas of vulnerability in Google’s dominance.

Facebook is a little different. They need to be addictive. This is a habit taken to the extreme. Addictions depend on pushing certain reward buttons in the brain that lead to an unhealthy behavioral script which becomes obsessive. The more addicted you are to Facebook and its properties, the more successful they will be in their dominance of the market. You can see the inherent contradiction here. Despite Facebook’s protests to the contrary, with their current revenue model they can only succeed at the expense of our mental health.

I find these things troubling. When you have two for-profit organizations fighting to dominate a market that is defined in our own minds, you have the potential for a lot of unhealthy corporate decisions.

 

Rethinking Media

I was going to write about the Facebook/Google duopoly, but I got sidetracked by this question, “If Google and Facebook are a duopoly, what is the market they are controlling?” The market, in this case, is online marketing, of which they carve out a whopping 61% of all revenue. That’s advertising revenue. And yet, we have Mark Zuckerberg testifying this spring in front of Congress that he is not a media company…

“I consider us to be a technology company because the primary thing that we do is have engineers who write code and build product and services for other people.”

That may be an interesting position to take, but his adoption of a media-based revenue model doesn’t pass the smell test. Facebook makes revenue from advertising, and you can only sell advertising if you are a medium. A medium, by definition, is an intervening agency that provides a means of communication. The trade-off for providing that means is that you get to monetize it by allowing advertisers to piggyback on the communication flow. There is nothing in the definition of “media” about content creation.

Google has also used this defense. The common thread seems to be that they are exempt from the legal checks and balances normally associated with media because they don’t produce content. But they do accept content, they do have an audience and they do profit from connecting these two through advertising. It is disingenuous to try to split legal hairs in order to avoid the responsibility that comes from their position as a mediator.

But this all brings up the question: what is “media”? We use the term a lot. It’s in the masthead of this website. It’s in the title slug of this daily column. We have stretched our working definition of media – one formed in an entirely different world – into a guide to what it might be in the future. It’s not working. We should stop.

First of all, definitions depend on stability, and the worlds of media and advertising are definitely not stable. We are in the middle of a massive upheaval. Secondly, definitions are mental labels. Labels are shortcuts we use so we don’t have to think about what something really is. And I’m arguing that we should be thinking long and hard about what media is now and what it might become in the future.

I can accept that technology companies want to disintermediate, democratize and eliminate transactional friction. That’s what technology companies do. They embrace elegance – in the scientific sense – as the simplest possible solution to something. What Facebook and Google have done is simplified the concept of media back to its original definition: the plural of medium, which is something that sits in the middle. In fact – by this definition – Google and Facebook are truer media than CNN, the New York Times or Breitbart. They sit in between content creators and content consumers. They have disintermediated the distribution of content. They are trying to reap all the benefits of a stripped down and more profitable working model of media while trying to downplay the responsibility that comes with the position they now hold. In Facebook’s case, this is particularly worrisome because they are also aggregating and distributing that content in a way that leads to false assumptions and dangerous network effects.

Media as we used to know it gradually evolved a check and balance process of editorial oversight and journalistic integrity that sat between the content they created and the audience that would consume it. Facebook and Google consider those things transactional friction. They were part of an inelegant system. These “technology companies” did their best to eliminate those human dependent checks and balances while retaining the revenue models that used to subsidize them.

We are still going to need media in a technological future. Whether they be platforms or publishers, we are going to depend on and trust certain destinations for our information. We will become their audience and in exchange they will have the opportunity to monetize this audience. All this should not come cheaply. If they are to be our chosen mediators, they have to live up to their end of the bargain.

 

 

Why The Paradox of Choice Doesn’t Apply to Netflix

A recent article in MediaPost reported that Millennials – and Baby Boomers, for that matter – prefer broad-choice platforms like Netflix and YouTube to channels specifically targeted to their demo. The survey cited found that almost 40% of respondents aged 18–24 used Netflix most often to view video content.

Author Wayne Friedman mused on the possibility that targeted channels might be a thing of the past: “This isn’t the mid-1990s. Perhaps audience segmentation into different networks — or separately branded, over-the-top digital video platforms  — is an old thing.”

It is. It’s aimed at an old world that existed before search filters – a world where Barry Schwartz’s Paradox of Choice was a thing. That’s not true in a world where we can quickly filter our choices.

Humans in almost every circumstance prefer the promise of abundance to scarcity. It’s how we’re hardwired. The variable here is our level of confidence in our ability to sort through the options available to us. If we feel confident that we can heuristically limit our choices to the most relevant ones, we will always forage in a richer environment.

In his book, Schwartz used the famous jam experiment of Sheena Iyengar to show how choice can paralyze us. Iyengar’s research team set up a booth with samples of jam in a gourmet food market. They alternated between a display of 6 jams and one of 24 options. They found that in terms of actually selling jams, the smaller display outperformed the larger one by a factor of 10 to 1. The study “raised the hypothesis that the presence of choice might be appealing as a theory,” Dr. Iyengar later said, “but in reality, people might find more and more choice to actually be debilitating.”

Yes, and no. What isn’t commonly cited is that in the study, 60% of shoppers were drawn to the larger display, while only 40% were hooked by the smaller one. Yes, fewer bought, but that probably came down to a question of being able to filter, not the attraction of the display itself. Also, other researchers (Scheibehenne, Greifeneder and Todd, 2010) have run into problems trying to verify the findings of the original study. They found that “on the basis of the data, no sufficient conditions could be identified that would lead to a reliable occurrence of choice overload.”
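
It's worth spelling out what those two figures – a 60/40 split in shoppers drawn to each display, and 10-to-1 total sales in favor of the smaller one – imply together. A quick back-of-envelope sketch (the crowd size here is an arbitrary assumption, purely for illustration):

```python
# Implied per-shopper conversion gap from the figures quoted above:
# the larger display drew 60% of passersby, the smaller 40%,
# yet the smaller display sold 10x as many jams in total.

passersby = 1000                   # arbitrary crowd size for illustration
stoppers_large = 0.60 * passersby  # drawn to the 24-jam display
stoppers_small = 0.40 * passersby  # drawn to the 6-jam display

sales_ratio = 10                   # small display outsold large, 10:1 overall

# Conversion (sales per stopper) ratio implied by those two numbers:
conversion_ratio = sales_ratio * (stoppers_large / stoppers_small)
print(conversion_ratio)  # 15.0
```

In other words, the big display was better at attracting attention, but each shopper who stopped at the small display was roughly 15 times as likely to buy – which is consistent with the filtering explanation: attraction and conversion were pulling in opposite directions.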

We all have a subconscious “foraging algorithm” that we use to sort through the various options in our environment. One of the biggest factors in this algorithm is the “cost of searching” – how much effort do we need to expend to find the thing we’re looking for? In today’s world, that breaks down into two variables: “finding” and “filtering.” A platform that’s rich in choice – like Netflix – virtually eliminates the cost of “finding.” We are confident that a platform that offers a massive number of choices will have something we will find interesting. So now it comes to “filtering.” If we feel confident enough in the filtering tools available to us, we will go with the richest environment available to us.  The higher our degree of confidence in our ability to “filter”, the less we will want our options limited for us.
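
The "foraging algorithm" described above can be sketched as a toy cost model. Everything in the sketch – the function, the weights, the numbers – is an illustrative assumption, not a real model of foraging behavior; it just shows how high filter confidence tips the payoff toward the richer environment:

```python
# Toy version of the foraging trade-off described above: the expected
# payoff of an environment is its richness minus the cost of finding
# minus the cost of filtering. Confidence in our filtering tools (0..1)
# discounts the filtering cost. All values are illustrative assumptions.

def expected_payoff(richness, finding_cost, filtering_cost, filter_confidence):
    """Higher confidence in filtering shrinks the effective filtering
    cost, making richer environments relatively more attractive."""
    effective_filtering = filtering_cost * (1.0 - filter_confidence)
    return richness - finding_cost - effective_filtering

# A choice-rich platform (Netflix-like): huge catalog, near-zero finding
# cost, a high raw filtering cost that good tools almost eliminate.
rich = expected_payoff(richness=10.0, finding_cost=0.5,
                       filtering_cost=8.0, filter_confidence=0.9)

# A narrow, targeted channel: small catalog, low costs all round.
narrow = expected_payoff(richness=4.0, finding_cost=1.0,
                         filtering_cost=1.0, filter_confidence=0.9)

print(rich > narrow)  # with confident filtering, the richer environment wins
```

Drop `filter_confidence` toward zero in the same sketch and the ranking flips – which is the Paradox of Choice scenario: abundance without the ability to filter it.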

So, when does it make sense to limit the options available to an audience? There are some conditions identified by Scheibehenne et al. where the Paradox of Choice is more likely to happen:

Unstructured Choices – The harder it is to categorize the choices available, the more difficult it is to filter those options.

Choices that are Hard to Compare to Each Other – If you’re comparing apples and oranges, either figuratively or literally, the cognitive load required to compare choices increases the difficulty.

The Complexity of Choices – The more information we have to juggle when we’re making a choice, the greater the likelihood that our brains may become overtaxed in trying to make a choice.

Time Pressure when Making a Choice – If you hear the theme song of Jeopardy when you’re trying to make a choice, you’re more likely to become frustrated while sorting through a plethora of options.

If you are in the business of presenting options to customers, remember that the Paradox of Choice is not a hard and fast rule. In fact, the opposite is probably true – the greater the perception of choice, the more attractive it will be to them. The secret is in providing your customers the ability to filter quickly and efficiently.

 

The Pros and Cons of Slacktivism

Lately, I’ve grown to hate my Facebook feed. But I’m also morbidly fascinated by it. It fuels the fires of my discontent with a steady stream of posts about bone-headedness and sheer WTF behavior.

As it turns out, I’m not alone. Many of us are morally outraged by our social media feeds. But does all that righteous indignation lead to anything?

Last week, MediaPost reran a column talking about how good people can turn bad online by following the path of moral outrage to mob-based violence. Today I ask, is there a silver lining to this behavior? Can the digital tipping point become a force for good, pushing us to take action to right wrongs?

The Ever-Touchier Triggers of Moral Outrage

As I’ve written before, normal things don’t go viral. The more outrageous and morally reprehensible something is, the greater likelihood there is that it will be shared on social media. So the triggering forces of moral outrage are becoming more common and more exaggerated. A study found that in our typical lives, only about 5% of the things we experience are immoral in nature.

But our social media feeds are algorithmically loaded to ensure we are constantly ticked off. This isn’t normal. Nor is it healthy.

The Dropping Cost of Being Outraged

So what do we do when outraged? As it turns out, not much — at least, not when we’re on Facebook.

Yale neuroscientist Molly Crockett studies the emerging world of online morality. And she found that the personal costs associated with expressing moral outrage are dropping as we move our protests online:

“Offline, people can harm wrongdoers’ reputations through gossip, or directly confront them with verbal sanctions or physical aggression. The latter two methods require more effort and also carry potential physical risks for the punisher. In contrast, people can express outrage online with just a few keystrokes, from the comfort of their bedrooms…”

What Crockett is describing is called slacktivism.

You May Be a Slacktivist if…

A slacktivist, according to Urbandictionary.com, is “one who vigorously posts political propaganda and petitions in an effort to affect change in the world without leaving the comfort of the computer screen.”

If your Facebook feed is at all like mine, it’s probably become choked with numerous examples of slacktivism. It seems like the world has become a more moral — albeit heavily biased — place. This should be a good thing, shouldn’t it?

Warning: Outrage Can be Addictive

The problem is that as morality moves online, it loses a lot of the social clout it has historically had to modify behaviors. Crockett explains:

“When outrage expression moves online it becomes more readily available, requires less effort, and is reinforced on a schedule that maximizes the likelihood of future outrage expression in ways that might divorce the feeling of outrage from its behavioral expression…”

In other words, outrage can become addictive. It’s easier to become outraged when it has no consequences for us, when it’s divorced from the normal societal checks and balances that govern our behavior, and when we get a nice little ego boost as others “like” or “share” our indignant rants. The last point is particularly true given the “echo chamber” characteristics of our social-media bubbles. These are all the prerequisites required to foster habitual behavior.

Outrage Locked Inside its own Echo Chamber

Another thing we have to realize about showing our outrage online is that it’s largely a pointless exercise. We are simply preaching to the choir. As Crockett points out:

“Ideological segregation online prevents the targets of outrage from receiving messages that could induce them (and like-minded others) to change their behavior. For politicized issues, moral disapproval ricochets within echo chambers but only occasionally escapes.”

If we are hoping to change anyone’s behavior by publicly shaming them, we have to realize that Facebook’s algorithms make this highly unlikely.

Still, the question remains: Does all this online indignation serve a useful purpose? Does it push us to action?

The answer seems to depend on two factors, each imposing its own threshold on our likelihood to act. One is whether we’re truly outraged or not. Because showing outrage online is so easy, with few consequences and the potential social reward of a post going viral, it has all the earmarks of a habit-forming behavior. Are we posting because we’re truly mad, or just bored?

“Just as a habitual snacker eats without feeling hungry, a habitual online shamer might express outrage without actually feeling outraged,” writes Crockett.

Moving from online outrage to physical action – whether it’s changing our own behavior or acting to influence a change in someone else – requires a much bigger personal investment on almost every level. This brings us to the second threshold factor: our own personal experiences and situation. Millions of women upped the ante by actively supporting #MeToo because it was intensely personal for them. It’s one example of an online movement that became one of the most potent political forces in recent memory.

One thing does appear to be true. When it comes to social protest, there is definitely more noise out there. We just need a reliable way to convert that to action.

Why Do Good People Become Bad Online?

Here are some questions I have:

  • When do crowds turn ugly?
  • Why do people become Trolls online?
  • When do opinions become morals, and what is the difference between the two?

Whether we like it or not, online connection engenders some decidedly bad behavior. It’s one of those unintended consequences that I like to talk about – a behavioral side effect that’s catalyzed by technology.  And, if this is the case, we should know a little more about the psychology behind this behavior.

Modified Mob Behavior

So, when does a group become a mob? And when does a mob turn ugly? Some aspects of herd mentality seem particularly conducive to online connection. A group turns into a mob when its behaviors become synced to a common purpose. A recent study from the University of Southern California found two predictive signals in social-media behavior that indicate when a group protest may become a violent mob.

Tipping Over the Threshold from an Opinion to a Moral

One of the things they found is that when we go from talking about our opinions to preaching morality, things can take a nasty turn. Imagine a spectrum running from loosely held opinions on the left end – things you’re not that emotionally invested in – through beliefs, and on to morals at the right end. This progression also correlates with different ways the brain processes the respective thoughts. At the least intense left end of the spectrum – opinions – we can process them with relatively detached rationality. But as we move to the right, different parts of the brain start kicking in and begin to raise the emotional stakes. When we believe we’re talking about morals, we suddenly have strongly held convictions about what is right and what is wrong. Morals are defined as “concerned with the principles of right and wrong behavior and the goodness or badness of human character.”

This triggers our ancient and universal feelings about fairness, harm, betrayal, subversion and degradation – the planks of moral foundations theory. The researchers in the USC study found that people are more likely to endorse violence when they moralize an issue. When there are clearly held beliefs about right and wrong, violence seems acceptable.

Violence Needs Company

This moralizing signal is not necessarily tied to being online. But the second predictive signal is. The researchers also found that if people believe others share their views, they are more likely to tip over the threshold from peaceful protest to violence. This is Mark Granovetter’s threshold model of crowd behavior, which I’ve talked about before. In social media, the effect is amplified by content filtering and the structure of your network. Like-minded people naturally link to each other, and their posts make for remarkably efficient indicators of their beliefs. It’s very easy in a social network to feel that everybody you know feels the same way you do. The degree of violent language can escalate quickly through online posts until the entire group is pushed over the threshold into a mode of behavior that would be unthinkable for a disconnected individual.
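Granovetter’s threshold model is simple enough to sketch in a few lines of code. What follows is a minimal illustration of the model’s classic textbook example (my own sketch, not code from the USC study): each person joins the crowd once the number of people already acting meets their personal threshold, and changing a single threshold can make the difference between a full cascade and no cascade at all.

```python
def cascade_size(thresholds):
    """Granovetter's threshold model of collective behavior.

    Each person acts once the number of people already acting
    reaches their own threshold. Returns how many people end up
    acting after the cascade settles.
    """
    acting = 0
    while True:
        # Everyone whose threshold is met by the current crowd joins in.
        now_acting = sum(1 for t in thresholds if t <= acting)
        if now_acting == acting:  # no one new joined: cascade has settled
            return acting
        acting = now_acting

# Classic example: thresholds 0, 1, 2, ..., 99 produce a full cascade
# of 100 rioters. Replace the lone person with threshold 1 by one with
# threshold 2, and the cascade stalls after a single instigator.
```

The point of the example is the fragility: two crowds with nearly identical belief distributions can produce wildly different outcomes, which is why perceived agreement in an echo chamber matters so much.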

Trolls, Trolls Everywhere

Another study, this time from Stanford, shows that any of us can become a troll. We would like to think that trolls are a small group of particularly horrible people. But this research indicates that trollism is more situational than previously thought. In other words, if we’re in a bad mood, we’re more likely to become a troll.

But it’s not just our mood. Here again Granovetter’s threshold model plays a part. Negative comments beget more negative comments, starting a downward spiral of venom. The researchers ran a behavioral test in which participants had to complete either an easy or a difficult task and then read an online article with either three neutral comments or three negative, troll-like comments. The results were eye-opening. In the group that was assigned the easy task and read the article with neutral comments, about 35% posted a negative comment. Knowing that one in three of us seems to have a low threshold for becoming a troll is not exactly encouraging, but it gets worse. If participants either did the difficult task or read the negative comments, the likelihood of posting a troll-like comment jumped to 50%. And if participants got both the difficult task and the negative comments, the number climbed to 68%!

In the three-part study, another factor that predicted trolling was the time that posts were made. Late Sunday and Monday nights are the worst times of the week for negative posts, and Twitter bullying hits its peak between 5 pm and 8 pm on Sunday. While we’re on the subject, Donald Trump tends to tweet early in the morning, and his most inflammatory tweets come on Saturdays.

But when it comes to trolling, there’s something else at play as well. Yet another study, this one from Federation University Australia, found that our own brand of empathy can also predict whether we’re going to become a troll. There are two kinds of empathy: cognitive and affective. Cognitive empathy means you can understand other people’s emotions – you know what will make them happy or mad. Affective empathy means you can internalize and experience the emotions of another – if they’re happy, you’re happy; if they’re mad, you’re mad. Not surprisingly, trolls tend to have high cognitive empathy but low affective empathy. Obviously, there were plenty of such people before the Internet, but they’ve now gained the perfect forum for their twisted form of empathy. They can incite negativity relatively free from social consequence and reprisal. Even if the comments made are not anonymous, the poster can hide behind a degree of detachment that would be impossible in a physical environment.

So, why should we care? Again, it comes back to this idea of the unintended social consequences of technology. Increasingly, our connections are digital in nature. And for reasons already stated, I worry that these types of connections may bring out the worst in us.

 

Is Live the New Live?

HQ Trivia – the popular mobile game app – seems to be going backwards. It’s an anachronism, going against all the things that technology promises. It tethers us to a schedule. It’s essentially a live game show broadcast (when everything works as it should, which is far from a sure bet) on a tiny screen. Yet it draws about a million players each and every time it plays, which is usually only twice a day.

My question is: Why the hell is it so popular?

Maybe it’s the Trivia Itself…

(Trivial Interlude – the word trivia comes from the Latin for the place where three roads come together. It was later used to refer to the three foundations of basic education – grammar, logic and rhetoric. The modern usage came from a 1902 book by Logan Pearsall Smith: “Trivialities, bits of information of little consequence.” The singular of trivia is trivium.)

As a spermologist (that’s a person who loves trivia – seriously – apparently the “sperm” has something to do with “seeds of knowledge”) I love a trivia contest. It’s one thing I’m pretty good at – knowing a little about a lot of things that have absolutely no importance. And if you too fancy yourself a spermologist (which, by the way, is how you should introduce yourself at social gatherings) you know that we always want to prove we’re the smartest people in the room. In HQ Trivia’s case, that room usually holds about a million people. That’s the current number of participants in the average broadcast. So the odds of being the smartest person in the room are – well – about one in a million. And a spermologist just can’t resist those odds.

But I don’t think HQ’s popularity is based on some alpha-spermology complex. A simple list of rankings would take care of that. No, there must be more to it. Let’s dig deeper.

Maybe it’s the Simoleons…

(Trivial Interlude: Simoleons is sometimes used as slang for American dollars, as Jimmy Stewart did in “It’s a Wonderful Life.” The word could be a portmanteau of “simon” and “Napoleon” – which was a 20 franc coin issued in France. The term seems to have originated in New Orleans, where French currency was in common use at the turn of the last century.)

HQ Trivia does offer up cash for smarts. Each contest has a prize, which is usually $5000. But even if you make it through all 12 questions and win, by the time the prize is divvied up amongst the survivors, you’ll probably walk away with barely enough money to buy a beer. Maybe two. So I don’t think it’s the prize money that accounts for the popularity of HQ Trivia.

Maybe it’s Because it’s Live…

(Trivial Interlude – As a Canadian, trivia is near and dear to my heart. America’s favorite trivia quiz master, Alex Trebek, is Canadian, born in Sudbury, Ontario. Alex is actually his middle name. George is his first name. He is 77 years old. And Trivial Pursuit, the game that made trivia a household name in the 80’s, was invented by two Canadians, Chris Haney and Scott Abbott. The pair, wanting to play Scrabble, found their game was missing some tiles, so they decided to create their own game. In 1984, more than 20 million copies of the game were sold.)

There is just something about reality in real time. Somehow, subconsciously, it makes us feel connected to something that is bigger than ourselves. And we like that. In fact, one of the other etymological roots of the word “trivia” itself is a “public place.”

The Hotchkiss Movie Choir Effect

If you want to choke up a Hotchkiss (or at least the ones I’m personally familiar with) just show us a movie where people spontaneously start singing together. I don’t care if it’s Pitch Perfect Twelve and a Half – we’ll still mist up. I never understood why, but I think it has to do with the same underlying appeal of connection. Dan Levitin, author of “This Is Your Brain on Music,” explained what happens in our brain when we sing as part of a group in a recent interview on NPR:

“We’ve got to pay attention to what someone else is doing, coordinate our actions with theirs, and it really does pull us out of ourselves. And all of that activates a part of the frontal cortex that’s responsible for how you see yourself in the world, and whether you see yourself as part of a group or alone. And this is a powerful effect.”

The same thing goes for flash mobs. I’m thinking there has to be some type of psychological common denominator that HQ Trivia has somehow tapped into. It’s like a trivia-based flash mob. Even when things go wrong, which they do quite frequently, we feel that we’re going through it together. Host Scott Rogowsky embraces the glitchiness of the platform and commiserates with us. Misery – even when it’s trivial – loves company.

Whatever the reason for its popularity, HQ Trivia seems to be moving forward by taking us back to a time when we all managed to play nicely together.

 

Advertising Meets its Slippery Slope

We’ve now reached the crux of the matter when it comes to the ad biz.

For a couple of centuries now, we’ve been refining the process of advertising. The goal has always been to get people to buy stuff. But right now, a perfect storm of forces is converging that requires some deep navel-gazing on the part of us insiders.

It used to be that to get people to buy, all we had to do was inform. Pent-up consumer demand, created by expanding markets and new product introductions, would take care of the rest. We just had to connect the better mousetraps with the world, which would then duly beat a path to the respective door. Advertising equaled awareness.

But sometime in the waning days of the consumer orgy that followed World War Two, we changed our mandate. Not content with simply informing, we decided to become influencers. We slipped under the surface of the brain, moving from providing information for rational consideration to priming subconscious needs. We started messing with the wiring of our market’s emotional motivations.  We became persuaders.

Persuasion is like a mental iceberg – 90% of the bulk lies below the surface. Rationalization is typically the hastily added layer of ad hoc logic that comes after the decision is already made. This is true, to varying degrees, for almost any consumer category you can think of – including, unfortunately, our political choices.

This is why, a few columns ago, I said Facebook’s current model is unsustainable. It is based on advertising, and I think advertising itself may have become unsustainable. The truth is, advertisers have gotten so good at persuading us to do things that we are beginning to revolt. It’s getting just too creepy.

To understand how we got here, let’s break down persuasion. It requires the persuader to shift the beliefs of the persuadee. The bigger the shift required, the tougher the job of persuasion.  We tend to build irrational (aka emotional) bulwarks around our beliefs to preserve them. For this reason, it’s tremendously beneficial to the persuader to understand the belief structure of their target. If they can do this, they can focus on those whose belief structure is most conducive to the shift required.

When it comes to advertisers, the needle on our creative powers of persuasion hasn’t really moved that much in the last half century. There were very persuasive ads created in the 1960’s and there are still great ads being created. The disruption that has moved our industry to the brink of the slippery slope has all happened on the targeting end.

The world we used to live in was a bunch of walled and mostly unconnected physical gardens. Within each, we would have relevant beliefs but they would remain essentially private. You could probably predict with reasonable accuracy the religious beliefs of the members of a local church. But that wouldn’t help you if you were wondering whether the congregation leaned towards Ford or Chevy.  Our beliefs lived inside us, typically unspoken and unmonitored.

That all changed when we created digital mirrors of ourselves through Facebook, Twitter, Google and all the other usual suspects. John Battelle, author of The Search,  once called Google the Database of Intentions. It is certainly that. But our intent also provides an insight into our beliefs. And when it comes to Facebook, we literally map out our entire previously private belief structure for the world to see. That is why Big Data is so potentially invasive. We are opening ourselves up to subconscious manipulation of our beliefs by anyone with the right budget. We are kidding ourselves if we believe ourselves immune to the potential abuse that comes with that. Like I said, 90% of our beliefs are submerged in our subconscious.

We are just beginning to realize how effective the new tools of persuasion are. And as we do, we are beginning to feel that this is all very unfair. No one likes being manipulated, even if they have willingly laid the groundwork for that manipulation. Our sense of retroactive justice kicks in. We post-rationalize and point fingers. We blame Facebook, or the government, or some hackers in Russia. But these are all just participants in a new ecosystem that we have helped build. The problem is not the players. The problem is the system.

It’s taken a long time, but advertising might just have gotten to the point where it works too well.