The Psychology Behind My Netflix Watchlist

I live in Canada – which means I’m going into hibernation for the next 5 months. People tell me I should take up a winter activity. I tell them I have one. Bitching. About winter – specifically. You have your hobbies – and I have mine.

The other thing I do in the winter is watch movies. And being a with-it, tech-savvy guy, I have cut the cord and get my movie fix through not one but three streaming services: Netflix, Amazon Prime and Crave (a Canadian service). I’ve discovered that the psychology of Netflix is fascinating. It’s the Paradox of Choice playing out in streaming time. It’s the difference between what we say we do and what we actually do.

For example, I do have a watch list. It has somewhere around a hundred items on it. I’ll probably end up watching about 20% of them. The rest will eventually go gentle into that good Netflix Night. And according to a recent post on Digg, I’m actually doing quite well: in the admittedly small sample chronicled there, the average completion rate is somewhere between 5% and 15%.

When it comes to compiling viewing choices, I’m an optimizer. And I’m being kind to myself – others, less kind, refer to it as obsessive behavior. This refers to the satisficing/optimizing spectrum of decision making: I put an irrational amount of energy into the rationalization of my viewing options. The more effort you put into decision making, the closer you are to the optimizing end of the spectrum. If you make choices quickly and with your gut, you’re a satisficer.

What is interesting about Netflix is that it defers the Paradox of Choice. I dealt with this in a previous column, but I admit I’m having second thoughts. Netflix’s watch list provides us with a sort of choosing purgatory – a middle ground where we can save titles according to the type of watcher we think we are. It’s here where the psychology gets interesting. But before we go there, let’s explore some basic psychological principles that underpin this Netflix paradox of choice.

Of Marshmallows and Will Power

In the 1960s, Walter Mischel and his colleagues conducted the now-famous Marshmallow Test, a longitudinal study that spanned several years. The finding (which is currently in some doubt) was that children who – when they were quite young – had the willpower to resist immediately taking a treat (the marshmallow) put in front of them, in return for a promise of a greater treat (two marshmallows) in 15 minutes, would later do substantially better in many aspects of their lives: education, careers, social connections, health. Without getting into the controversial aspects of the test, let’s just focus on the role of willpower in decision making.

Mischel talks about a hot and cool system for making decisions that involve self-gratification. The “hot” is our emotions and the “cool” is our logic. We all have different set points in the balance between hot and cool, and where those set points sit in each of us depends on willpower. The more willpower we have, the more likely it is that we’ll delay an immediate reward in return for a greater reward sometime in the future.

Our ability to rationalize and expend cognitive resources on a decision is directly tied to our willpower. And researchers have found that willpower is a finite resource: the more we use in a day, the less we have in reserve. Psychologists call this “ego depletion,” and a loss of willpower leads to decision fatigue. The more tired we become, the less our brain is willing to work on the decisions we make. In one particularly interesting example, parole boards are much more likely to let prisoners go either first thing in the morning or right after lunch than they are as the day wears on. Granting a prisoner his or her freedom is a decision that involves risk; it requires more thought. Keeping them in prison is the default – cognitively speaking, a much easier choice.

Netflix and Me: Take Two

Let me now try to rope all this in and apply it to my Netflix viewing choices. When I add something to my watch list, I am making a risk-free decision. I am not committing to watch the movie now. Cognitively, it costs me nothing to hit the little plus icon. Because it’s risk free, I tend to be somewhat aspirational in my entertainment foraging. I add foreign films, documentaries, old classics, independent films and – just to leaven out my selection – the latest audience-friendly blockbusters. When it comes to my watch list additions, I’m pretty eclectic.

Eventually, however, I will come back to this watch list and will actually have to commit 2 hours to watching something. And my choices are very much affected by decision fatigue. When it comes to instant gratification, a blockbuster is an easy choice. It will have lots of action, recognizable and likeable stars, a non-mentally-taxing script – let’s call it the cinematic equivalent of a marshmallow that I can eat right away. All my other watch list choices will probably be more gratifying in the long run, but more mentally taxing in the short term. Am I really in the mood for a European art-house flick? The answer probably depends on my current “ego-depletion” level.

This entire mental framework presents its own paradox of choice to me every time I browse through my watchlist. I know I have previously said the Paradox of Choice isn’t a thing when it comes to Netflix. But I may have changed my mind. I think it depends on what resources we’re allocating. In his book The Paradox of Choice, Barry Schwartz cites Sheena Iyengar’s famous jam experiment. In that instance, the resource was the cost of jam. But if we’re talking about 2 hours of my time – at the end of a long day – I have to confess that I struggle with choice, even when it’s already been shortlisted to a pre-selected set of potential entertainment options. I find myself defaulting to what seems like a safe choice – a well-known Hollywood movie – only to be disappointed when the credits roll. When I do have the willpower to forego the obvious and take a chance on one of my more obscure picks, I’m usually grateful I did.

And yes, I did write an entire column on picking a movie to watch on Netflix. Like I said, it’s winter and I had a lot of time to kill.


It’s Not Whether We Like Advertising – It’s Whether We Accept Advertising

Last week, I said we didn’t like advertising. That – admittedly – was a blanket statement.

In response, MediaPost reader Kevin VanGundy said:

“I’ve been in advertising for 39 years and I think the premise that people don’t like advertising is wrong. People don’t like bad advertising.”

I think there’s truth in both statements. The problem here is the verb I chose to use: “like.” The future of advertising hangs not on what we like, but on what we accept. “Like” is an afterthought. By the time we decide whether we like something or not, we’ve already been exposed to it. It’s whether we open the door to that exposure that will determine the future of advertising. So let’s dig a little deeper there, shall we?

First, seeing as we started with a blanket statement, let’s spend a little time unpacking this idea of “liking” advertising. As Mr. VanGundy agreed, we don’t like bad advertising. The problem is that most advertising is bad, in that it’s not really that relevant to us “in the moment.” Even with the best programmatic algorithms currently being used, the vast majority of the targeted advertising presented to me is off the mark. It’s irrelevant, it’s interruptive and that makes it irritating.

Let’s explore how the brain responds to this. Our brains love to categorize and label, based on our past experience. It’s the only way we can sort through and process the tsunami of input we get presented with on a daily basis. So, just like my opening sentence, the brain makes blanket statements. It doesn’t deal with nuance very well, at least in the subconscious processing of stimuli. It quickly categorizes into big generic buckets and sorts the input, discarding most of it as unworthy of attention and picking the few items of interest out of the mix. In this way, our past experience predicts our future behavior, in terms of what we pay attention to. And if we broadly categorize advertising as irritating, this will lessen the amount of attention we’re willing to pay.

As a thought experiment to support my point, think of what you would do if you clicked on a news story in the Google results and, when you arrived at the article page, got a pop-up informing you that your ad-blocker was on. You are given two options: whitelist the page so you receive advertising, or keep your ad-blocker on and read the page anyway. I’m betting you would keep your ad-blocker on. That’s because you were given a choice, and that choice included the option to avoid advertising – which you took because advertising annoys you.

To further understand why the exchange that forms the foundation of advertising is crumbling, we have to understand that much of the attentional activity in the brain is governed by a heuristic algorithm that is constantly calculating trade-offs between resources and reward. It governs our cognitive resources by predicting what would have to be invested versus what the potential reward might be. This subconscious algorithm tends to be focused on the task at hand; anything that gets in the way of the contemplated task is an uncalculated investment of resources. And the algorithm is governed by our past experience and broad categorizations. If you have categorized advertising as “bad,” the brain will quickly cut that category out of consideration – the investment of attention is not warranted given the expected reward. If you did happen to be served a “good” ad that managed to make it into consideration – an exception to our general categorization that advertising is annoying – that categorization can change, but the odds are stacked against it. It’s just that low-probability occurrence that the entire ad industry is built on.
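To make that trade-off concrete, here is a minimal toy sketch – my own illustration, not anything from this column or the research it alludes to – of how a learned category prior can discount an ad’s expected reward before it is ever consciously weighed. All the numbers are made up.

```python
# Toy illustration only (not a neuroscience model): attention treated as a
# resource-vs-reward trade-off, gated by a learned category prior.

# Made-up priors: how often items from this category have paid off for us.
category_prior = {
    "task_content": 0.70,   # what we actually came to the page to do
    "advertising": 0.05,    # broadly filed under "irritating"
}

ATTENTION_COST = 0.25       # assumed fixed cost of diverting focus


def worth_attending(category: str, predicted_reward: float) -> bool:
    """Attend only if the prior-discounted reward beats the cost."""
    expected_value = category_prior[category] * predicted_reward
    return expected_value > ATTENTION_COST


# Even a genuinely "good" ad (high predicted reward) usually loses, because
# the category prior discounts it before it is ever really considered.
print(worth_attending("task_content", 0.6))   # True  -> gets attention
print(worth_attending("advertising", 0.9))    # False -> filtered out
```

The point of the sketch is simply that the filter acts on the category, not the individual ad – which is why one good ad rarely gets the chance to update the prior.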

Finally, let’s look at that probability. In the past, the probability was high enough to warrant the investment of ad dollars. The probability was higher because our choices were fewer. Often, we had only one path to get to what we sought, and that path led through an ad. The brain had no other available options. That’s no longer the case. Let’s go back to our ad-blocker example.

Let’s say the pop-up didn’t give us a choice – we had to whitelist to see the article. The resource-reward algorithm kicks into action: What are the odds we could find the information – ad-free – elsewhere? How important is the information to us? Will we ever want to come back to this site to read another article? Perhaps we give in and whitelist. Or perhaps we just abandon the site with a sour taste in our mouth. The latter was happening more and more, which is why we now see fewer news sites offering the whitelist-or-nothing option. The probability of our market seeing an ad is dropping because they have more ad-free alternatives. Or at least, they think they do.

And it’s this thought – precisely this thought – that is eroding the foundation of advertising, whether we like it or not.


Why Marketing is Increasingly Polarizing Everything


Trump. Kanye. Kaepernick. Miracle Whip.

What do these things all have in common? They’re polarizing. Just the mention of them probably stirs up strong feelings in you, one way or the other.

Wait. Miracle Whip?

Yep. Whether you love or hate Miracle Whip is perhaps the defining debate of our decade.

Okay, maybe not. But it turns out that Miracle Whip – which I always thought of as the condiment counterpart to vanilla – is a polarized brand, according to an article in the Harvard Business Review.  And far from being aghast at the thought, Kraft Foods, the maker of Miracle Whip, embraced the polarization with gusto. They embedded it in their marketing.

I have to ask – when did it become a bad thing to be vanilla? I happen to like vanilla. But I always order something else. And there’s the rub. Vanilla is almost never our first choice, because we don’t like to be perceived as boring.

Boring is the kiss of death for marketing. So even Miracle Whip, which is literally “boring” in a jar, is trying to “whip” up some controversy. Our country is being split down the middle and driven to either side – shoved to the margins of outlier territory. Outrageous is not only acceptable. It’s become desirable. And marketing is partly to blame.

We marketers are enamored with this idea of “viralness.” We want advertising to be amplified through our target customers’ social networks. Boring never gets retweeted or shared. Something has to jolt us past those information filters we keep set on high alert. That’s why polarization works. By moving to extremes, brands catch our attention. And as they move to extremes, they drag us along with them. Increasingly, the brands we choose as our own identifying badges are moving away from any type of common middle ground. Advertising is creating a nation of ideological tribes with an ever-increasing divide separating them.

The problem is that polarization works. Look at Nike. As Sarah Mahoney recently documented in a MediaPost article, the Colin Kaepernick campaign turned in some impressive numbers for Nike. Research from Kantar Millward Brown found these ads were particularly effective in piercing our ennui. The surprising part is that they did it on both sides of the divide. Based on Kantar’s Link evaluation, the ad scored in the top 15% of ads on something called “Power Contribution.” According to Kantar, that’s the ad’s “potential to impact long-term equity.” If we strip away the “market-speak,” that basically means the Kaepernick ads make Nike an excellent tribal badge to rally around.

If you’re a marketer, it’s hard to argue with those numbers. And is it really important if half the world loves a brand and the other half hates it? I suspect it is. The problem comes when we look at exactly the thing Kantar’s Link evaluation measures: the intensity of feeling you have towards a brand. The more intense the feeling, the less rational you are. And if the object of your affection lies in outlier territory, those emotions can become highly confrontational towards those on the other side of the divide. Suddenly, opinions become morals, and there is no faster path to hate than embracing a polarized perspective on morality. The more that emotionally charged marketing pushes us towards the edges, the harder it is to respect opinions opposed to our own. Embracing polarization in unimportant areas – like which running shoes you choose to wear – increases polarization in areas where it’s much more dangerous. Like politics.

As if we haven’t seen enough evidence of this lately, polarized politics can cripple a country. In a recent interview on NPR, Georgia State political science professor Jennifer McCoy listed three possible outcomes from polarization. First, the country can enter polarization gridlock, where nothing can get done because there is a complete lack of trust between opposing parties. Secondly, a polarization pendulum can occur, where power swings back and forth between the two sides and most of the political energy is expended undoing the initiatives of the previous government. Often there is little logic to this, other than the fact that the initiatives were started by “them” and not “us.” Finally, one side can find a way to stay in power and then actively work to diminish and vanquish the other side by dismantling democratic platforms.

Today, as you vote, you’ll see ample evidence of the polarization of America. You’ll also see that at least two of the three outcomes of polarization are already playing out. We marketers just have to remember that while we love it when a polarized brand goes viral, there may be another one of those unintended consequences lurking in the background.


Avoiding the Truth: Dodging Reality through Social Media

“It’s very hard to imagine that the revival of Flat Earth theories could have happened without the internet.”

Keith Kahn-Harris – Sociologist and author of “Denial: The Unspeakable Truth” – in an interview on CBC Radio

On November 9, 2017, 400 people got together in Raleigh, North Carolina. They all believe the earth is flat. This November 15th and 16th, they will do it again in Denver, Colorado. If you are so inclined, you could even join other flat earthers for a cruise in 2019. The Flat Earth Society is a real thing. They have their own website. And – of course – they have their own Facebook page (actually, there seem to be a few pages; apparently, there are Flat Earth factions).

Perhaps the most troubling thing is this: it isn’t a joke. These people really believe the earth is flat.

How can this happen in 2018? For the answer, we have to look inwards – and backwards – to discover a troubling fact about ourselves. We’re predisposed to believe stuff that isn’t true. And, as Mr. Kahn-Harris points out, this can become dangerous when we add an obsessively large dose of time spent online, particularly with social media.

It makes sense that there was an evolutionary advantage for a group of people who lived in the same area and dealt with the same environmental challenges to have the same basic understanding of things. These commonly held beliefs allowed group learnings to be passed down to the individual: eating those red berries would make you sick, wandering alone in the savannah was not a good idea, coveting thy neighbor’s wife might get you stabbed in the middle of the night. Our beliefs often saved our ass.

Because of this, it was in our interest to protect our beliefs. They formed part of our “fast” reasoning loop, not requiring the brain to kick in and do any processing. Cognitive scientists refer to this as “fluency.” Our brains have evolved to be lazy: if they don’t have to work, they don’t. And in the adaptive environment we evolved in – for the reasons already stated – this cognitive shortcut generally worked to our benefit. Ask anyone who has had to surrender a long-held belief. It’s tough to do. Overturning a belief requires a lot of cognitive horsepower. It’s far easier to protect it with a scaffolding of supporting “facts” – no matter how shaky that scaffolding may be.

Enter the Internet. And the usual suspect? Social media.

As I said last week, the truth is often hard to handle – especially if it runs headlong into our beliefs. I don’t want to believe in climate change because the consequences of that truth are mind-numbingly frightening. But I find I’m forced to. I also don’t believe the earth is flat. For me, in both cases, the evidence is undeniable. That’s me, however. There are plenty of people who don’t believe climate change is real and – according to the Facebook Official Flat Earth Discussion group – there are at least 107,372 people who believe the earth is flat. The same evidence is available to them. Why are we different?

When it comes to our belief structure, we all have different mindsets, plotted on a spectrum of credulity. I’m what you may call a scientific skeptic. I tend not to believe something is true unless I see empirical evidence supporting it. There are others who tend to believe in things at a much lower threshold. And this tendency is often found across multiple domains. The mindset that embraces creationism, for example, has been shown to also embrace conspiracy theories.

In the pre-digital world, our beliefs were a feature, not a bug. When we shared a physical space with others, we also relied on a shared “mind-space” that served us well. Common beliefs created a more cohesive social herd and were typically proven out over time against the reality of our environment. Beneficial beliefs were passed along and would become more popular, while non-beneficial beliefs were culled from the pack. It was the cognitive equivalent of Adam Smith’s “Invisible hand.” We created a belief marketplace.

Beliefs are moderated socially. The more unpopular our personal beliefs, the more pressure there is to abandon them. There is a tipping-point mechanism at work here. Again, in a physically defined social group, those whose mindsets tend to look for objective proof will be the first to abandon a belief that is obviously untrue. From that point forward, social contagion can be a more effective factor in spreading the new perspective through a population than the actual evidence. “What is true?” is not as important as “What does my neighbor believe to be true?”
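That tipping-point dynamic is, in effect, a threshold model of contagion. The sketch below is a toy simulation of my own (made-up thresholds, not anything cited in this column): each person abandons a belief once the share of the group that has already abandoned it exceeds their personal threshold, with evidence-driven skeptics at a threshold of zero. Whether the skeptics trigger a cascade or stand alone depends entirely on how the other thresholds are distributed.

```python
# Toy threshold-contagion sketch (assumed parameters, purely illustrative).

def cascade(thresholds):
    """Return how many people end up abandoning a belief.

    thresholds[i] is the share of the group that must have already
    abandoned the belief before person i will abandon it too.
    Evidence-driven skeptics have a threshold of 0.0.
    """
    n = len(thresholds)
    abandoned = [t == 0.0 for t in thresholds]   # skeptics defect first
    changed = True
    while changed:
        changed = False
        share = sum(abandoned) / n
        for i, t in enumerate(thresholds):
            if not abandoned[i] and t <= share:
                abandoned[i] = True
                changed = True
    return sum(abandoned)

# Same evidence, slightly different mix of thresholds, very different outcome:
print(cascade([0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]))  # 10 -> full cascade
print(cascade([0.0, 0.3, 0.3, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9]))  # 1  -> the skeptic stands alone
```

In the second run, nobody’s neighbor budges, so the objectively false belief survives – which is roughly the situation a like-minded online community can manufacture on demand.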

This is where social media comes in. On Facebook, a community is defined in the mind, not in any particular physical space. Proximity becomes irrelevant. Online, we can always find others who believe the same things we do. A Flat Earther can find comfort by going on a cruise with hundreds of other Flat Earthers and saying that 107,372 people can’t be wrong. They can even point to “scientific” evidence proving their case. For example, if the earth weren’t flat, a jetliner would have to continually point its nose down to keep from flying off into space (granted, this argument conveniently ignores gravity and all sorts of other physics, but why quibble).

Social media provides a progressive banquet of options for dealing with unpleasant truths. Probably the most benign of these is something I wrote about a few weeks back – slacktivism. At least slacktivists acknowledge the truth. From there, you can progress to a filtering of facts (only acknowledging the truths you can handle), willful ignorance (purposely avoiding the truth), denialism (rejecting the truth) and full-out fantasizing (manufacturing an alternate set of facts). Examples of all of these abound on social media.

In fact, the only thing that seems hard to find on Facebook is the bare, unfiltered, unaltered truth. And that’s probably because we’re not looking for it.


Why We No Longer Want to Know What’s True

“Truth isn’t truth” – Rudy Giuliani – August 19, 2018

Even without Giuliani’s bizarre statement, we’re developing a weird relationship with the truth. It’s becoming even more inconvenient. It’s certainly becoming more worrisome. I was chatting with a psychiatrist the other day who counsels seniors. I asked him if he was noticing more general anxiety in that generation – a feeling of helplessness with how the world seems to be going to hell in a handbasket. I asked him that because I am less optimistic about the future than I ever have been in my life. I wanted to know if that was unusual. He said it wasn’t – I had plenty of company.

You can pick the truth that is most unsettling. Personally, I lose sleep over climate change, the rise of populist politics and the resurgence of xenophobia. I have to limit the amount of news I consume in any day, because it sends me into a depressive state. I feel helpless. And as much as I’m limiting my intake because of my own mental health, I can’t help thinking that this is a dangerous path I’m heading down.

After doing a little research, I have found that things like PTSD (President Trump Stress Disorder) and TAD (Trump Anxiety Disorder) are real things. They’re recognized by the American Psychological Association. After a ten-year decline, anxiety levels in the US spiked dramatically after November 2016. Clinical psychologist Jennifer Panning, who coined the term TAD, says “the symptoms include feeling a loss of control and helplessness, and fretting about what’s happening in the country and spending excessive time on social media.”

But it’s not just the current political climate that’s causing anxiety. It’s also the climate itself. Enter “ecoanxiety.” Again…the APA in a recent paper nails a remarkably accurate diagnosis of how I’m feeling: “Gradual, long-term changes in climate can also surface a number of different emotions, including fear, anger, feelings of powerlessness, or exhaustion.”

“You can’t handle the truth” – Colonel Nathan R. Jessep (from the movie “A Few Good Men”)

So – when the truth scares the hell out of you – what do you do? We can find a few clues in the quotes above. One is this idea of a loss of control. The other is spending excessive time on social media. My belief is that the latter exacerbates the former.

In a sense, Rudy Giuliani is right. Truth isn’t truth, at least, not on the receiving end. We all interpret truth within the context of our own perceived reality. This in no way condones the manipulation of truth upstream from when it reaches us. We need to trust that our information sources are providing us the closest thing possible to a verifiable and objective view of truth.  But we have to accept the fact that for each of us, truth will ultimately be filtered through our own beliefs and understanding of what is real. Part of our own perceived reality is how in control we feel of the current situation. And this is where we begin to see the creeping levels of anxiety.

In 1954, psychologist Julian Rotter introduced the idea of a “locus of control” – the degree of control we believe we have over our own lives. For some of us, our locus is tipped to the internal side. We believe we are firmly at the wheel of our own lives. Others have an external locus, believing that life is left to forces beyond our control. But like most concepts in psychology, the locus of control is not a matter of black and white. It is a spectrum of varying shades of gray. And anxiety can arise when our view of reality seems to be beyond our own locus of control.

The word locus itself comes from the Latin for “place” or “location.” Typically, our control is exercised over those things that are physically close to us. And up until 150 years ago, that worked well. We had little awareness of things beyond our own little world, so we didn’t need to worry about them. But electronic media changed that. Suddenly, we were aware of wars, pestilence, poverty, famines and natural disasters from around the world. This made us part of Marshall McLuhan’s “Global Village.” The circle of our “locus of awareness” suddenly had to accommodate the entire world, but our “locus of control” just couldn’t keep pace.

Even with this expansion of awareness, one could still say that truth remained relatively true. There was an editorial process of checks and balances that vetted the veracity of the information we were presented. It certainly wasn’t perfect, but we could place some confidence in the truth of what we read, saw and heard.

And then came social media. Social media creates a nasty feedback loop when it comes to the truth. Once again, Dr. Panning typified these new anxieties as “fretting about what’s happening in the country and spending excessive time on social media.” The algorithmic targeting of social media platforms means that you’re getting a filtered version of the truth. Facebook knows exactly what you’re most anxious about and feeds you a steady diet of content tailored specifically to those anxieties. We have the comfort of seeing posts from members of our network who seem to fear the same things we do and share the same beliefs. But the more time we spend seeking this comfort, the more we’re exposed to those anxiety-inducing triggers and the further we drift from the truth. It creates a downward spiral that leads to the new types of environmental anxiety we are seeing. And to deal with those anxieties, we’re developing new strategies for handling the truth – or, at least, our version of the truth. That’s where I’ll pick up next week.


Deconstructing the Google/Facebook Duopoly

We’ve all heard about it. The Google/Facebook Duopoly. This was what I was going to write about last week before I got sidetracked. I’m back on track now (or, at least, somewhat back on track). So let’s start by understanding what a duopoly is…

…a situation in which two suppliers dominate the market for a commodity or service.

And this, from Wikipedia…

… In practice, the term is also used where two firms have dominant control over a market.

So, to have a duopoly, you need two things: domination and control. First, let’s deal with the domination question. In 2017, Google and Facebook together took a very healthy 59% slice of all digital ad revenues in the US – Google captured 38.6% of the total, and Facebook 20%. That certainly seems dominant. But if online marketing is the market, that is a very large basket with a lot of different items thrown in. So let’s do a broad categorization to help deconstruct this a bit. Typically, when I try to understand marketing, I like to start with humans – or, more specifically, what that lump of grey matter we call a brain is doing. And if we’re talking about marketing, we’re talking about attention – how our brains are engaging with our environment. That is an interesting way to divide up the market we’re talking about, because it neatly bisects the attentional market, with Google on one side and Facebook on the other.

Google dominates the top-down, intent-driven, attentionally focused market. If you’re part of this market, you have something in mind and you’re trying to find it. If we use search as a proxy for this attentional state (which is the best proxy I can think of), we see just how dominant Google is. It owns this market to a huge degree: according to Statista, Google had about 87% of the total worldwide search market as of April 2018. The key metric here is success. Google needs to be the best way to fulfill those searches. And if market share is any indication, it is.

Facebook apparently dominates the bottom-up awareness market. These are people killing time online; they are not actively looking with commercial intent. This is more of an awareness play, where attention has to be diverted to an advertising message. Therefore, time spent becomes the key factor. You need to be in front of the right eyeballs – so you need a lot of eyeballs, and a way to target the right ones.

Here is where things get interesting. If we look at share of consumer time, Google dominates here. But there is a huge caveat, which I’ll get to in a second. According to a report this spring by Pivotal Research, Google owns just under 28% of all the time we spend consuming digital content; Facebook has just over a 16% share. So why do we have a duopoly and not a monopoly? It’s because of that caveat: a whopping slice of Google’s “time spent” dominance comes from YouTube. And YouTube has an entirely different attentional profile – one that’s much harder to present advertising against. When you’re watching a video on YouTube, your attention is “locked” on the video. Disrupting that attention erodes the user experience. So Google has had a tough time monetizing YouTube.

According to Seeking Alpha, Google’s search ad business will account for 68% of its total revenue of $77 billion this year. That’s over $52 billion sitting in that “top-down,” attentionally focused bucket. YouTube, which is very much in the “bottom-up” disruptive bucket, accounts for $12 billion in advertising revenues – certainly nothing to sneeze at, but not on the same scale as Google’s search business. Facebook’s revenue, at about $36 billion, is also generated by this same “bottom-up” market, but with a different attentional profile. The Facebook user is not as “locked in” as they are on YouTube. With the right targeting tools – something Facebook has excelled at – you have a decent chance of gaining their attention long enough for them to notice your ad.
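Just to keep the arithmetic straight, here’s a quick back-of-the-envelope check of the figures quoted above (the dollar amounts and percentages are the cited estimates, not my own numbers):

```python
# Back-of-the-envelope check of the revenue figures quoted above
# (all figures are the cited estimates, in billions of US dollars).

google_total_revenue = 77.0     # Seeking Alpha estimate for the year
google_search_share = 0.68      # share of Google revenue from search ads

google_search_ads = google_total_revenue * google_search_share   # ~52.4
youtube_ads = 12.0              # "bottom-up" disruptive bucket
facebook_ads = 36.0             # also "bottom-up", different attention profile

print(f"Google search ads:  ${google_search_ads:.1f}B")            # top-down bucket
print(f"YouTube + Facebook: ${youtube_ads + facebook_ads:.1f}B")   # bottom-up buckets

# The top-down intent bucket alone out-earns both bottom-up buckets combined.
print(google_search_ads > youtube_ads + facebook_ads)              # True
```

In other words, the intent-driven side of the attentional split is still where most of the money sits.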

Control

If we look at the second part of the definition of a duopoly – that of control – we see some potential chinks in the armor of both Google and Facebook. Typically, market control was in the form of physical constraints against the competition. But in this new type of market, the control can only be in the minds of the users. The barriers to competitive entry are all defined in mental terms.

In Google’s case, they have a single line of defense: they have to be an unbreakable habit. Habits are mental scripts that depend on two things – obvious environmental cues that trigger the habitual behavior, and acceptable outcomes once the script completes. So, to maintain the habit, Google has to ensure that whatever environment you might be in when searching online for something, Google is just a click or two away. Additionally, they have to meet a certain threshold of success. Habits are tough to break, but those two requirements – ubiquity and success – are also the two areas of vulnerability in Google’s dominance.

Facebook is a little different. They need to be addictive. This is a habit taken to the extreme. Addictions depend on pushing certain reward buttons in the brain that lead to an unhealthy behavioral script, which becomes obsessive. The more addicted you are to Facebook and its properties, the more successful they will be in their dominance of the market. You can see the inherent contradiction here. Despite Facebook’s protests to the contrary, with their current revenue model they can only succeed at the expense of our mental health.

I find these things troubling. When you have two for-profit organizations fighting to dominate a market that is defined in our own minds, you have the potential for a lot of unhealthy corporate decisions.


Rethinking Media

I was going to write about the Facebook/Google duopoly, but I got sidetracked by this question: “If Google and Facebook are a duopoly, what is the market they are controlling?” The market, in this case, is online marketing, of which they carve out a whopping 61% of all revenue. That’s advertising revenue. And yet, we have Mark Zuckerberg testifying this spring in front of Congress that Facebook is not a media company…

“I consider us to be a technology company because the primary thing that we do is have engineers who write code and build product and services for other people”

That may be an interesting position to take, but his adoption of a media-based revenue model doesn’t pass the smell test. Facebook makes its revenue from advertising, and you can only sell advertising if you are a medium. A medium, by definition, is literally an intervening agency that provides a means of communication. The trade-off for providing that means is that you get to monetize it by allowing advertisers to piggyback on that communication flow. There is nothing in the definition of “media” about content creation.

Google has also used this defense. The common thread seems to be that they are exempt from the legal checks and balances normally associated with media because they don’t produce content. But they do accept content, they do have an audience and they do profit from connecting these two through advertising. It is disingenuous to try to split legal hairs in order to avoid the responsibility that comes from their position as a mediator.

But this all brings up the question: what is “media”? We use the term a lot. It’s in the masthead of this website. It’s in the title slug of this daily column. We have been extending our working definition of media – one formed in an entirely different world – as a guide to what it might be in the future. It’s not working. We should stop.

First of all, definitions depend on stability, and the worlds of media and advertising are definitely not stable. We are in the middle of a massive upheaval. Secondly, definitions are mental labels, and labels are shortcuts we use so we don’t have to think about what something really is. And I’m arguing that we should be thinking long and hard about what media is now and what it might become in the future.

I can accept that technology companies want to disintermediate, democratize and eliminate transactional friction. That’s what technology companies do. They embrace elegance –  in the scientific sense – as the simplest possible solution to something. What Facebook and Google have done is simplified the concept of media back to its original definition: the plural of medium, which is something that sits in the middle. In fact – by this definition – Google and Facebook are truer media than CNN, the New York Times or Breitbart. They sit in between content creators and content consumers. They have disintermediated the distribution of content. They are trying to reap all the benefits of a stripped down and more profitable working model of media while trying to downplay the responsibility that comes with the position they now hold. In Facebook’s case, this is particularly worrisome because they are also aggregating and distributing that content in a way that leads to false assumptions and dangerous network effects.

Media as we used to know it gradually evolved a check and balance process of editorial oversight and journalistic integrity that sat between the content they created and the audience that would consume it. Facebook and Google consider those things transactional friction. They were part of an inelegant system. These “technology companies” did their best to eliminate those human dependent checks and balances while retaining the revenue models that used to subsidize them.

We are still going to need media in a technological future. Whether they be platforms or publishers, we are going to depend on and trust certain destinations for our information. We will become their audience and in exchange they will have the opportunity to monetize this audience. All this should not come cheaply. If they are to be our chosen mediators, they have to live up to their end of the bargain.