The Gap Between People and Platforms

I read with interest fellow Spinner Dave Morgan’s column about how software is destroying advertising agencies, but not the need for them. I do want to chime in on what’s happening in advertising, but I need a little more time to think about it.

What did catch my eye was a comment at the end by Harvard Business School professor Alvin Silk: “You can eliminate the middleman, but not his/her function.”

I think Dave and Alvin have put their collective thumbs on something that extends beyond our industry: the growing gap between people and platforms. I’ll use my current industry as an example – travel. It’s something we all do so we can all relate to it.

Platforms and software have definitely eaten this industry. In terms of travel destination planning, the 800-pound gorilla is TripAdvisor. It’s impossible to overstate its importance to operators and business owners. TripAdvisor almost single-handedly ushered in an era of do-it-yourself travel planning. For any destination in the world, we can now find the restaurants, accommodations, tours and attractions that are the favorites of other travellers. It allows us to both discover and filter while planning our next trip, something that was impossible 20 years ago, before TripAdvisor came along.

But for all its benefits, TripAdvisor also leaves some gaps.

The biggest gap in travel is what I’ve heard called the “Other Five.” I live in Canada’s wine country (yes, there is such a thing). Visitors to our valley – the Okanagan – generally arrive with five wineries they plan to visit. The chances are very good that those wineries were selected with the help of TripAdvisor. But while they’re here, they also visit the “other five”: five wineries they discovered once they got to the destination. Those discoveries depend on more traditional means – either word of mouth or sheer serendipity. And it’s often one of these “other five” that provides the truly memorable and authentic experiences.

That’s the problem with platforms like TripAdvisor, which are based on general popularity and algorithms. Technically, platforms should help you discover the long tail, but they don’t. Everything automatically defaults to the head of the curve. It’s the Matthew Effect applied to travel – advantage accumulates to those already blessed. We all want to see the same things – up to a point.

But then we want to explore the “other five” and that’s where we find the gap between platforms and people. We have been trained by Google not to look beyond the first page of online results. It’s actually worse than that: we don’t typically scan beyond the top five. But – by the very nature of ratings-based algorithms – that is never where you’ll find the “other five.” They languish in the middle of the results, sometimes taking years to bump up even a few spots. It’s why there’s still a market – and a rapidly expanding one at that – for a tour guided by an actual human. Humans can think beyond an algorithm, asking questions about what you like and pulling from their own experience to make very targeted and empathetic suggestions.
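
To see how fast this dynamic compounds, here is a toy “rich get richer” simulation. It is purely illustrative (this is not how TripAdvisor actually ranks anything, and all the numbers are invented): fifty venues start with zero reviews, and each visitor picks a venue with probability proportional to the reviews it already has.

```python
import random

def simulate_reviews(num_venues=50, num_visitors=10_000, seed=42):
    """Toy Matthew Effect model: each visitor picks a venue with
    probability proportional to its current review count (plus one,
    so unreviewed venues still get an occasional look), then adds
    one more review to the winner."""
    rng = random.Random(seed)
    reviews = [0] * num_venues
    for _ in range(num_visitors):
        weights = [count + 1 for count in reviews]
        winner = rng.choices(range(num_venues), weights=weights)[0]
        reviews[winner] += 1
    return sorted(reviews, reverse=True)

counts = simulate_reviews()
top_share = sum(counts[:5]) / sum(counts)
print(f"Top 5 of 50 venues hold {top_share:.0%} of all reviews")
```

In runs of this model, the top handful of venues typically ends up with several times its “fair” 10% share, purely because early luck compounds. That is the Matthew Effect in miniature: nothing about the lucky venues is better; they were simply reviewed first.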

The problem with platforms is their preoccupation with scale. They feel they have to be all things to all people. I’ll call it Unicornitis – the obsession with gaining a massive valuation. They approach every potential market focused on how many users they can capture. By doing so, they have to target the lowest common denominator. The web thrives on scale and popularity; the rich get richer and the poor get poorer. Yes, there are niche players out there, but they’re very hard to find. They are the “other five” of the Internet, sitting on the third page of Google results.

This has almost nothing to do with advertising, but I think it’s the same phenomenon at work. As we rely more on software, we gain a false confidence that it replaces human-powered expertise. It doesn’t. And a lot of things can slip through the gap that’s created.

 

The Social Acceptance of Siri

There was a time, not too long ago, when I did a fairly exhaustive series of posts on the acceptance of technology. The psychology of how and when we adopted disruptive tech fascinated me. So Laurie Sullivan’s article on how more people are talking to their phone caught my eye.

If you look at tech acceptance, there is a bucketful of factors you have to consider. Utility, emotions, goals, ease of use, cost and our own attitudes all play a part. But one of the biggest factors is social acceptance. We don’t want to look like a moron in front of friends and family. It was this, more than anything else, that killed Google Glass the first time around. Call it the Glasshole factor.

So, back to Laurie’s article and the survey she referred to in it. Which shifts in the social universe are making it more acceptable to shoot the shit with Siri?

The survey has been done for the last three years by Stone Temple, so we’re starting to see some emerging trends. Here are the things that caught my attention. First of all, the biggest shifts from 2017 to 2019, in terms of percentage, were usage at the gym, in public restrooms and in the theatre. Usage at home has actually slipped a little (one might assume those conversations have migrated to Alexa and other home-based digital assistants). If we’re looking at acceptance of technology and the factors driving it, one thing jumps out from the survey: all the shifts have to do with how comfortable we feel talking to our phones in publicly visible situations. There is obviously a moving threshold of acceptability here.

Meanwhile, the three social “safe zones” – those instances where we wouldn’t be judged for speaking to our phones – have shown little movement in the last three years. These are “Home Alone,” “Home with Friends” (public but presumably safe from social judgment) and “Office Alone.” As much as is possible in survey-based research, this isolates the social factor from all the other variables rather nicely and shows its importance in our collective jumping on the voice technology bandwagon.

This highlights an important lesson in the acceptance of new technologies: you have to budget in the time required for society to absorb and accept them. The more the technology will be used in visibly social situations, the more time you need to budget. Otherwise, the tech will only be adopted by a tiny group of socially obtuse techno-weenies and will be stranded on the wrong side of the bleeding edge. As technology becomes more personal and tags along with us in more situations, the designers and marketers of that tech will have to understand this.

This places technology acceptance in a whole new ballpark. As the tech we use increasingly becomes part of our own social-facing brand, our carefully constructed personas and the social norms we have in place become key factors that determine the pace of acceptance.

This becomes a delicate balancing act. How do you control social acceptance? As an example, let’s take one of my favorite marketing punching bags – influencer marketing – and ask whether we could accelerate acceptance by seeding a new technology with a few key social connectors. That same strategy failed miserably when it came to promoting Google Glass to the public. And there’s a perfectly irrational reason for it. It had nothing to do with rational stuff like use cases, aesthetics or technology. It had to do with Google picking the wrong influencers – the so-called Google Glass Explorers. As a group, they tended to be tech-obsessed, socially awkward and painfully uncool. They were the people you avoid getting stuck in the corner with at a party because you just aren’t up for a 90-minute conversation on the importance of regular hard drive hygiene. No one wants to be them.

If this survey tells us anything, it tells us that – sometimes – you just have to hope and wait. Ever since Everett Rogers first sketched it out in 1962, we’ve known that innovation diffusion happens on a bell curve. Some innovations get stranded on the upside of the slope and wither away to nothingness while some make it over the hump and become part of our everyday lives. Three years ago, there were certainly people talking to their phones on buses, in gyms and at movie theatres. They didn’t care if they were judged for it. But most of us did care. Today, apparently, the social stigma has disappeared for many of us. We were just waiting for the right time – and the right company.

Less Tech = Fewer Regrets

In a tech-ubiquitous world, I fear our reality is becoming more “tech” and less “world.” But how do you fight that? Well, if you’re Kendall Marianacci – a recent college grad – you ditch your phone and move to Nepal. In the process she learned that “paying attention to the life in front of you opens a new world.”

In a recent post, she reflected on lessons learned by truly getting off the grid:

“Not having any distractions of a phone and being immersed in this different world, I had to pay more attention to my surroundings. I took walks every day just to explore. I went out of my way to meet new people and ask them questions about their lives. When this became the norm, I realized I was living for one of the first times of my life. I was not in my own head distracted by where I was going and what I needed to do. I was just being. I was present and welcoming to the moment. I was compassionate and throwing myself into life with whoever was around me.”

It’s sad and a little shocking that we have to go to such extremes to realize how much of our world can be obscured by a little 5-inch screen. Where did tech that was supposed to make our lives better go off the rails? And was the derailment intentional?

“Absolutely,” says Jesse Weaver, a product designer. In a post on Medium.com, he lays out – in alarming terms – our tech dependency and the trade-off we’re agreeing to:

“The digital world, as we’ve designed it, is draining us. The products and services we use are like needy friends: desperate and demanding. Yet we can’t step away. We’re in a codependent relationship. Our products never seem to have enough, and we’re always willing to give a little more. They need our data, files, photos, posts, friends, cars, and houses. They need every second of our attention.

We’re willing to give these things to our digital products because the products themselves are so useful. Product designers are experts at delivering utility.”

But are they? Yes, there is utility here, but it’s wrapped in a thick layer of addiction. What product designers are really good at is fostering addiction by dangling a carrot of utility. And, as Weaver points out, we often mistake utility for empowerment:

“Empowerment means becoming more confident, especially in controlling our own lives and asserting our rights. That is not technology’s current paradigm. Quite often, our interactions with these useful products leave us feeling depressed, diminished, and frustrated.”

That’s not just Weaver’s opinion. A new study from HumaneTech.com backs it up with empirical evidence. They partnered with Moment, a screen time tracking app, “to ask how much screen time in apps left people feeling happy, and how much time left them in regret.”

According to 200,000 iPhone users, here are the apps that make people happiest:

  1. Calm
  2. Google Calendar
  3. Headspace
  4. Insight Timer
  5. The Weather
  6. MyFitnessPal
  7. Audible
  8. Waze
  9. Amazon Music
  10. Podcasts

That’s three meditative apps, three utilitarian apps, one fitness app, one entertainment app and two apps that help you broaden your intellectual horizons. If you are talking human empowerment – according to Weaver’s definition – you could do a lot worse than this roundup.

But here were the apps that left their users with a feeling of regret:

  1. Grindr
  2. Candy Crush Saga
  3. Facebook
  4. WeChat
  5. Candy Crush
  6. Reddit
  7. Tweetbot
  8. Weibo
  9. Tinder
  10. Subway Surf

Even more interesting is the average time spent in these apps. For the first group, the average daily usage was 9 minutes. For the regret group, the average daily time spent was 57 minutes! We feel better about apps that do their job, add something to our lives and then let us get on with living that life. What we hate are time sucks that offer a kernel of functionality wrapped in an interface that ensnares us like a digital spider web.

This study comes from the Center for Humane Technology, headed by ex-Googler Tristan Harris. The goal of the Center is to encourage designers and developers to create apps that move “away from technology that extracts attention and erodes society, towards technology that protects our minds and replenishes society.”

That all sounds great, but what does it really mean for you and me and everybody else who hasn’t moved to Nepal? It all depends on the revenue model driving development of these apps and platforms. If it depends on advertising – in any form – don’t count on any nobly intentioned shifts in design direction anytime soon. More likely, it will mean some half-hearted placations like Apple’s new Screen Time warning that pops up on your phone every Sunday, giving you the illusion of control over your behaviour.

Why an illusion? Because things like Apple’s Screen Time are great for our prefrontal cortex, the intent-driven part of our rational brain that puts our best intentions forward. They’re not so good for our lizard brain, which subconsciously drives us to play Candy Crush and swipe our way through Tinder. And when it comes to addiction, the lizard brain has been on a winning streak for most of the history of mankind. I don’t like our odds.

The developers’ escape hatch is always the same: they’re giving us control. It’s our own choice, and freedom of choice is always a good thing. But there is an unstated deception here. It’s the same lie that Mark Zuckerberg told last Wednesday when he laid out the privacy-focused future of Facebook. He’s putting us in control. But he’s not. What he’s doing is making us feel better about spending more time on Facebook. And that’s exactly the problem. The less we worry about the time we spend on Facebook, the less we will think about it at all. The less we think about it, the more time we will spend. And the more time we spend, the more we will regret it afterwards.

If that doesn’t seem like an addictive cycle, I’m not sure what does.

 

Reality Vs Meta-Reality

“I know what I like, and I like what I know;”
Genesis

I watched the Grammys on Sunday night. And as it turned out, I didn’t know what I liked. And I thought I liked what I knew. But by the time I wrote this column (on Monday after the Grammys) I had changed my mind.

And it was all because of the increasing gap between what is real, and what is meta-real.

Real is what we perceive with our senses at the time it happens. Meta-real is how we reshape reality after the fact and then preserve it for future reference. And thanks to social media, the meta-real is a booming business.

Nobel laureate Daniel Kahneman first explored this with his work on the experiencing self and the remembering self. In a stripped-down example, imagine two scenarios. Scenario 1 has your hand immersed for 60 seconds in ice-cold water that causes a moderate amount of pain. Scenario 2 has your hand immersed for 90 seconds: the first 60 seconds in water at the same temperature as Scenario 1, but then you leave your hand immersed for an additional 30 seconds while the water is slowly warmed by one degree.

After going through both scenarios and being told you have to repeat one of them, which would you choose? Logically speaking, you should choose Scenario 1. While uncomfortable, it spares you an extra 30 seconds of slightly less painful experience. But that’s not what happened: 80% of participants who noticed that the water got a bit warmer chose to redo Scenario 2.

It turns out that we have two mental biases that kick in when we remember something we experienced:

  1. Duration doesn’t count
  2. Only the peak (best or worst moment) and the end of the experience are registered.
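
Kahneman sometimes summarizes these two biases as the “peak-end rule”: the remembered intensity of an experience is roughly the average of its worst moment and its final moment, with duration dropped entirely. A quick back-of-envelope sketch (the per-second pain numbers here are invented for illustration) shows why Scenario 2 wins:

```python
def remembered_pain(samples):
    """Peak-end rule: memory keeps roughly the average of the worst
    moment and the final moment; duration doesn't count."""
    return (max(samples) + samples[-1]) / 2

# Scenario 1: pain level 8 for 60 seconds (one sample per second)
scenario_1 = [8] * 60
# Scenario 2: the same 60 seconds, plus 30 more as the water warms
# and the pain eases slightly to 7
scenario_2 = [8] * 60 + [7] * 30

print(remembered_pain(scenario_1))  # 8.0
print(remembered_pain(scenario_2))  # 7.5
```

Scenario 2 delivers strictly more total discomfort, yet is remembered as the milder experience – which is exactly why 80% chose to repeat it.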

This applies to a lot more than just cold-water experiments. It also holds true for vacations, medical procedures, movies and even the Grammys. Not only that, there is an additional layer of meta-analysis that shifts us even further from the reality we actually experienced.

After I watched the Grammys, I had my own opinion of which performances I liked and those I didn’t care for. But that opinion was a work in progress. On Monday morning, I searched for “Best moments of Grammys 2019.” Rather quickly, my opinion changed to conform with what I was reading. And those summaries were in turn based on an aggregate of opinions gleaned from social media. It was Wisdom of Crowds – applied retroactively.

The fact is that we don’t trust our own opinions. This is hardwired in us. Conformity is something the majority of us look for. We don’t want to be the only one in the room with a differing opinion. Social psychologist Solomon Asch proved this almost 70 years ago. The difference is that in the Asch experiment, conformity happened in the moment. Now, thanks to our digital environment where opinions on anything can be found at any time, conformity happens after the fact. We “sandbox” our own opinions, waiting until we can see if they match the social media consensus. For almost any event you can name, there is now a market for opinion aggregation and analysis. We take this “meta” data and reshape our own reality to match.

It’s not just the malleability of our reality that is at stake here. Our memories serve as guides for the future. They color the actions we take and the people we become. We evolved as conformists because that was a much surer bet for our survival than relying on our own experiences alone.  But might this be a case of a good thing taken too far? Are we losing too much confidence in the validity of our own thoughts and opinions?

I’m pretty sure it doesn’t matter what Gord Hotchkiss thinks about the Grammys of 2019. But I fear there’s much more at stake here.

Marketing Vs. Advertising: Making It Personal

Last year I wrote a lot about the erosion of the advertising bargain between advertisers and their audience. Without rehashing at length, let me summarize by simply stating that we are no longer as accepting of advertising because we now have a choice. One of those columns sparked a podcast on Beancast (the relevant discussion kicked off the podcast).

As the four panelists – all of whom are marketing/advertising professionals – started debating the topic, they got mired down in the question of what is advertising, and what is marketing. They’re not alone. It confuses me too.

I’ve spent all my life in marketing, but this was a tough column to write. I really had to think about the essential differences between advertising and marketing – casting aside the textbook definitions and getting to something that resonated at an intuitive level. I ran into the same conundrum as the panelists. The disruption washing over our industry is also washing away the traditional line drawn between the two. So I did what I usually do when I find something intellectually ambiguous: I tried to simplify it down to the most basic analogy I could think of. When it comes to me – as a person – what would be the equivalent of marketing, what would be advertising, and – just to muddy the waters a little more – what would be branding? If we can reduce this to something we can gut-check, maybe the answers will come more easily.

Let’s start with branding. Your brand is what people think of you as a person. Are you a gentleman or an asshole? Smart, funny, pedantic, prickly, stunningly stupid? Fat and lazy, or lean and athletic? Notice that I said your brand is what other people think of you, not what you think of yourself. How you conduct yourself as a person will influence the opinions of others, but ultimately your brand is arbitrated one person at a time, and you are not that person. Branding involves both parties, but not necessarily at the same time. It can be asynchronous. You live your life and, by doing so, you create ripples in the world. People develop opinions of you.

To me, although it involves other people, marketing is somewhat faceless and less intimate. In a way, it’s more unilateral than advertising. Again, to take it back to our personal analogy, marketing is simply the social you – the public extension of who you are. One might say that your personal approach to marketing is you saying, “This is me, take it or leave it!”

But advertising is different. It focuses on a specific recipient. It implies a bilateral agreement. Again, analogously speaking, it’s like asking another person for a favor. There is an implicit or explicit exchange of value. It involves an overt attempt to influence.

Let’s further refine this into a single example. You’re invited to a party at a friend’s house. When you walk in the door, everyone glances over to see who’s arrived. When they recognize you, each person immediately has their own idea of who you are and how they feel about you. That is your brand. It has already been formed by your marketing, how you have interacted with others your entire life. At that moment of recognition, your own brand is beyond your control.

But now you have to mingle. You scan the room and see someone you know who is already talking to someone else. You walk over, hoping to work your way into their conversation. That, right there, is advertising. You’re asking for their attention. They have to decide whether to give it to you or not. How they decide will depend on how they feel about you, but it will also depend on what else they’re doing – i.e., how interesting the conversation they’re already engaged in is. Another variable is their expectation of what a conversation with you might hold – the anticipated utility of said conversation. Are you going to tell them some news of great interest, ask for a favor, or just bore them to tears? So the success of the advertising exchange, in the eyes of the recipient, can be defined by three variables: emotional investment in the advertiser (brand love), openness to interruption, and expected utility if interrupted.

If this analogy approximates the essential nature of advertising, why do I feel advertising is doomed? I don’t think it has anything to do with branding. I’ve gone full circle on this, but right now I believe brands are more important than ever. No, the death of advertising will be attributable to the other two variables: do we want to be interrupted and, if the answer is yes, what do we expect to gain by allowing the interruption?

First of all, let’s look at our openness to interruption. It may sound counterintuitive, but our obsession with multitasking actually makes us less open to interruption.

Think of how we’re normally exposed to advertising content. It’s typically on a screen of some type. We may be switching back and forth between multiple screens. And it’s probably right when we’re juggling a full load of enticing cognitive invitations: checking our social media feeds, deciding which video to watch, tracking down a wanted website, trying to load an article that interests us. The expected utility of all these things is high. We have “fear of missing out” – big time! This is just when advertising interrupts us, asking us to pay attention to its message.

“Paying attention” is exactly the right phrase to use. Attention is a finite resource that can be exhausted – and that’s exactly what multitasking does. It exhausts our cognitive resources. The brain – in defence – becomes more miserly with those resources: the threshold that must be met before the brain allocates attention goes up. The brain does this not simply by ignoring anything that falls short of that threshold, but by triggering a mildly negative reaction, a feeling of irritation with whatever is begging for our attention. This is a hardwired response meant to condition us for the future: the brain assumes that if we don’t want to be interrupted now, the same rule will hold later, and making us irritated is a way to enforce it. This reaction sets up a reinforcing cycle that builds an increasingly antagonistic attitude towards advertising.

Secondly, what is the expected utility of paying attention to advertising? This goes hand in hand with the previous thought. Advertising was always a kind of tollgate we had to pass through to access content, but now we have choices. The expected utility of the advertising-supported content has been largely removed from the equation, leaving us with just the expected utility of the advertisement itself. The brain constantly runs an algorithm that balances resource allocation against reward, and in our new environment the resource-allocation threshold keeps getting higher as the reward keeps getting lower.

Is Google Politically Biased?

As a company, the answer is almost assuredly yes.

But are the search results biased? That’s a much more nuanced question.

Sundar Pichai testifying before Congress

In trying to answer that question last week, Google CEO Sundar Pichai tried to explain how Google’s algorithm works to Congress’s House Judiciary Committee (which is kind of like God explaining how the universe works to my sock, but I digress). One of the catalysts for this latest Congressional appearance by a tech CEO was another of President Trump’s ranting tweets, intimating that something was rotten in the Valley of the Silicon:

“Google search results for ‘Trump News’ shows only the viewing/reporting of Fake News Media. In other words, they have it RIGGED, for me & others, so that almost all stories & news is BAD. Fake CNN is prominent. Republican/Conservative & Fair Media is shut out. Illegal? 96% of … results on ‘Trump News’ are from National Left-Wing Media, very dangerous. Google & others are suppressing voices of Conservatives and hiding information and news that is good. They are controlling what we can & cannot see. This is a very serious situation-will be addressed!”

Granted, this tweet is non-factual, devoid of any type of evidence and verging on frothing at the mouth. As just one example, take the 96% number Trump quotes. It came from a very unscientific straw poll done by one reporter on a far-right-leaning site called PJ Media. In effect, Trump did exactly what he accuses Google of doing: he cherry-picked his source and called it a fact.

But what Trump has inadvertently put his finger on is the uneasy balance that Google tries to maintain as both a search engine and a publisher. And that’s where the question becomes cloudy. It’s a moral precipice that may be clear in the minds of Google engineers and executives, but it’s far from that in ours.

Google has gone on the record insisting its algorithm is apolitical. But based on a recent interview with Google News head Richard Gingras, there is some wiggle room in that assertion. Gingras stated:

“With Google Search, Google News, our platform is the open web itself. We’re not arbiters of truth. We’re not trying to determine what’s good information and what’s not. When I look at Google Search, for instance, our objective – people come to us for answers, and we’re very good at giving them answers. But with many questions, particularly in the area of news and public policy, there is not one single answer. So we see our role as [to] give our users, citizens, the tools and information they need – in an assiduously apolitical fashion – to develop their own critical thinking and hopefully form a more informed opinion.”

But –  in the same interview – he says,

“What we will always do is bias the efforts as best we can toward authoritative content – particularly in the context of breaking news events, because major crises do tend to attract the bad actors.”

So Google does boost news sites it feels are reputable, and it’s these sites – like CNN – that typically dominate the results. Do reputable news sources tend to lean left? Probably. But that isn’t Google’s fault. That’s the nature of the open web. If you use it as your platform, you build in any inherent biases. And the minute you add further filtering on top of that platform, you leave yourself open to accusations of editorializing.

There is another piece to this puzzle. The fact is that searches on Google are biased, but that bias is entirely intentional. The bias in this case is yours. Search results are personalized so that they’re more relevant to you. Things like your location, your past search history, the way you structure your query and a number of other signals are used by Google to filter the results you’re shown. There is no liberal conspiracy. It’s just the way the search algorithm works. In this way, Google is prone to the same type of filter-bubble problem that Facebook has. Tim Hwang, director of the Harvard-MIT Ethics and Governance of AI Initiative, touched on this in another interview:

“I was struck by the idea that whereas those arguments seem to work as late as only just a few years ago, they’re increasingly ringing hollow, not just on the side of the conservatives, but also on the liberal side of things as well. And so what I think we’re seeing here is really this view becoming mainstream that these platforms are in fact not neutral, and that they are not providing some objective truth.”

The biggest challenge here lies not in the reality of what Google is or how it works, but in what our perception of Google is. We will never know the inner workings of the Google algorithm, but we do trust in what Google shows us. A lot. In our own research some years ago, we saw a significant lift in consumer trust when brands showed up on top of search results. And this effect was replicated in a recent study that looked at Google’s impact on political beliefs. This study found that voter preferences can shift by as much as 20% due to biased search rankings – and that effect can be even higher in some demographic groups.

If you are the number one channel for information, if you manipulate the ranking of that information in any way, and if you wield the power to change a significant percentage of minds based on that ranking – guess what? You are the arbiter of truth. Like it or not.

The Psychology Behind My Netflix Watchlist

I live in Canada – which means I’m going into hibernation for the next 5 months. People tell me I should take up a winter activity. I tell them I have one. Bitching. About winter – specifically. You have your hobbies – and I have mine.

The other thing I do in the winter is watch movies. And being a with-it, tech-savvy guy, I have cut the cord and get my movie fix through not one but three streaming services: Netflix, Amazon Prime and Crave (a Canadian service). I’ve discovered that the psychology of Netflix is fascinating. It’s the Paradox of Choice playing out in streaming time. It’s the difference between what we say we do and what we actually do.

For example, I do have a watch list. It has somewhere around a hundred items on it. I’ll probably end up watching about 20% of them. The rest will eventually go gentle into that good Netflix Night. And according to a recent post on Digg, I’m actually doing quite well. According to the admittedly small sample chronicled there, the average completion rate is somewhere between 5 and 15%.

When it comes to compiling viewing choices, I’m an optimizer. And I’m being kind to myself. Others, less kind, refer to it as obsessive behavior. This refers to the satisficing/optimizing spectrum of decision making. I put an irrational amount of energy into the rationalization of my viewing options. The more effort you put into decision making, the closer you are to the optimizing end of the spectrum. If you make choices quickly and with your gut, you’re a satisficer.
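
Herbert Simon’s original distinction between the two styles can be sketched in a few lines of code. The titles, ratings and threshold below are all made up for illustration; real choosers are obviously messier:

```python
def satisfice(options, threshold):
    """The satisficer: take the first option that's 'good enough'
    and stop looking."""
    for title, rating in options:
        if rating >= threshold:
            return title
    return None

def optimize(options):
    """The optimizer: exhaustively compare every option and take
    the best, no matter how long the list is."""
    return max(options, key=lambda option: option[1])[0]

watchlist = [("Movie A", 6.8), ("Movie B", 7.9),
             ("Movie C", 9.1), ("Movie D", 8.4)]

print(satisfice(watchlist, threshold=7.5))  # Movie B: first one past the bar
print(optimize(watchlist))                  # Movie C: the best, but only
                                            # after scanning everything
```

The satisficer’s cost is fixed and small; the optimizer’s cost grows with the length of the list, which is exactly why a hundred-item watchlist is so expensive for people like me.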

What is interesting about Netflix is that it defers the Paradox of Choice. I dealt with this in a previous column, but I admit I’m having second thoughts. Netflix’s watch list provides us with a sort of choosing purgatory – a middle ground where we can save according to the type of watcher we think we are. It’s here where the psychology gets interesting. But before we go there, let’s explore some basic psychological principles that underpin this Netflix paradox of choice.

Of Marshmallows and Will Power

In the 1960s, Walter Mischel and his colleagues conducted the now-famous Marshmallow Test, a longitudinal study that spanned several years. The finding (which is currently in some doubt) was that children who, when they were quite young, had the willpower to resist immediately taking a treat (the marshmallow) put in front of them in return for the promise of a greater treat (two marshmallows) in 15 minutes would later do substantially better in many aspects of their lives: education, careers, social connections, health. Without getting into the controversial aspects of the test, let’s just focus on the role of willpower in decision making.

Mischel talks about a hot and cool system for making decisions that involve self-gratification. The “hot” is our emotions and the “cool” is our logic. We all have different set points in the balance between hot and cool, and where those set points sit in each of us depends on willpower. The more willpower we have, the more likely it is that we’ll delay an immediate reward in return for a greater reward sometime in the future.

Our ability to rationalize and expend cognitive resources on a decision is directly tied to our willpower. And researchers have learned that willpower is a finite resource: the more we use in a day, the less we have in reserve. Psychologists call this “ego depletion,” and a loss of willpower leads to decision fatigue. The more tired we become, the less our brain is willing to work on the decisions we make. In one particularly interesting example, parole boards are much more likely to let prisoners go either first thing in the morning or right after lunch than as the day wears on. The decision to grant a prisoner his or her freedom involves risk and requires more thought. Keeping them in prison is a default decision that – cognitively speaking – is a much easier choice.

Netflix and Me: Take Two

Let me now try to rope all this in and apply it to my Netflix viewing choices. When I add something to my watch list, I am making a risk-free decision. I am not committing to watch the movie now. Cognitively, it costs me nothing to hit the little plus icon. Because it’s risk free, I tend to be somewhat aspirational in my entertainment foraging. I add foreign films, documentaries, old classics, independent films and – just to leaven out my selection – the latest audience-friendly blockbusters. When it comes to my watch list additions, I’m pretty eclectic.

Eventually, however, I will come back to this watch list and will actually have to commit 2 hours to watching something. And my choices are very much affected by decision fatigue. When it comes to instant gratification, a blockbuster is an easy choice. It will have lots of action, recognizable and likeable stars, a non-mentally-taxing script – let’s call it the cinematic equivalent of a marshmallow that I can eat right away. All my other watch list choices will probably be more gratifying in the long run, but more mentally taxing in the short term. Am I really in the mood for a European art-house flick? The answer probably depends on my current “ego-depletion” level.

This entire mental framework presents its own paradox of choice every time I browse through my watch list. I know I have previously said the Paradox of Choice isn’t a thing when it comes to Netflix, but I may have changed my mind. I think it depends on what resources we’re allocating. In Barry Schwartz’s book, The Paradox of Choice, he cites Sheena Iyengar’s famous jam experiment. In that instance, the resource was the cost of jam. But if we’re talking about two hours of my time – at the end of a long day – I have to confess that I struggle with choice, even when it’s already been shortlisted to a pre-selected set of potential entertainment options. I find myself defaulting to what seems like a safe choice – a well-known Hollywood movie – only to be disappointed when the credits roll. When I do have the willpower to forego the obvious and take a chance on one of my more obscure picks, I’m usually grateful I did.

And yes, I did write an entire column on picking a movie to watch on Netflix. Like I said, it’s winter and I had a lot of time to kill.