Why We’re Not Ready for AI to Take the Wheel…Yet

It’s interesting to see how we humans assign trust.

Consider the following scenario. At any time, in any city in the world, you will put your life in the hands of a complete stranger in an environment you have no control over without a second thought. We do it every time we hail a cab. We know nothing about the driver or their safety record. We don’t know if they’re a good person or a psychopath. We place trust without any empirical reason to do so.

Yet a number of recent surveys indicate the majority of us don’t trust self-driving cars. A recent survey by AAA found that 71% of us would be afraid to ride in a fully self-driving vehicle. I’m one of them. I’m not sure I could slam the door on a self-driven Uber and relax in the back seat while AI takes the wheel. Yet I pride myself on being a fairly rational person, and there are plenty of rational reasons why self-driving cars should be far safer than their human-powered equivalents. Even the most skeptical measured comparisons call it a toss-up.

And that brings us to a key point: we don’t assign trust rationally. We do it emotionally. And emotionally, we have a tortured relationship with technology.

The problem here is two-fold. First, our trust mechanisms are built to work best when we’re face-to-face with the potential recipient of trust. Trust evolved to be a human-dependent process. And that brings us to the second problem. Over the last thousand years or so, we have learned how to trust in institutions. But that type of trust is dissolving rapidly.

Author and academic Rachel Botsman has spent over a decade looking at how technology is transforming trust. In an interview with Fast Company, she unpacks this notion of imploding institutional trust: “Whether it’s banks, the media, government, churches . . . this institutional trust that is really important to society is disintegrating at an alarming rate. And so how do we trust people enough to get in a car with a total stranger and yet we don’t trust a banking executive?”

I think this transformation of trust has something to do with the decoupling phenomenon I wrote about last week. When we relied on vertically integrated supply chains, we had no choice but to trust the institutions that were the caretakers of those chains. But now that our markets have flipped from the vertical to the horizontal, we are redefining our notions of trust. We are digitally connecting with strangers through sharing economy platforms like AirBnB and Uber and, in the process, we are finding new signals to indicate when we should trust and when we shouldn’t.

There is another unique aspect to our decision to trust. We tend to trust when it’s expedient to do so. Like so many things in human behavior, trust is just one factor wrapped up in our ongoing risk vs reward calculations. Our emotions will push us to trust when it’s required to get what we want. The fewer the alternatives available to us, the more we tend to trust.

Our lack of trust in self-driving vehicles is a more visceral example. I don’t think anyone believes the creators of self-driving technology are out to off our species in a self-driven version of a Mad Max conspiracy. We just aren’t wired to trust machines with our lives. There is an innate human hubris that believes that when it comes to self-preservation, our fates are best left in our own hands.

Self-driving proponents believe that with time and exposure, these trust issues will be resolved. The trick to us trusting machines with our lives is to lull us into not thinking about it too much. Millions of us do it every day when we board an airplane. The degree to which our airborne lives are dependent on technology was tragically revealed with the recent Boeing Max incidents. The fact is, if we had any idea how much our living to see tomorrow is dependent on technology, we would dissolve into a shuddering, panic-stricken mess. In this case, ignorance is indeed bliss.

But there are few times when we have to make the same conscious decision to put our lives in the metaphorical hands of a computer to the extent we do in a self-driven car. If we look at how we decide to trust, this is an environment strewn with psychological landmines. Remember, we tend to trust when we have no options. And in this case, our option couldn’t be clearer. The steering wheel is right there, begging us to take over. It freaks us out when the car pulls away from the curb and we see the wheel start turning by itself. It’s small wonder that 71% of us are having some control issues.

 

Why Are So Many Companies So Horrible At Responding To Emails?

I love email. I hate 62.4% of the people I email.

Sorry. That’s not quite right. I hate 62.4% of the people I email in the futile expectation of a response…sometime…in the next decade or so (I will get back to the specificity of that 62.4% shortly). It’s you who suck.

You know who you are. You are the ones who never respond to emails, who force me to send email after email with an escalating tone of prickliness, imploring you to take a few seconds from whatever herculean tasks fill your day to actually acknowledge my existence.

It’s you who force me to continually set aside whatever I’m working on to prod you into doing your damned job! And — often — it is you who causes me to eventually abandon email in exasperation and sink further into the 7th circle of customer service hell: voicemail.

Why am I (and trust me, I’m not alone) so exasperated with you? Allow me to explain.

From our side, when we send an email, we are making a psychological statement about how we expect this communication channel to proceed. We have picked this channel deliberately. It is the right match for the mental prioritization we have given this task.

In 1891, in a speech on his 70th birthday, German scientist Hermann von Helmholtz explained how ideas came to him. He identified four stages that were later labeled by social psychologist Graham Wallas: Preparation, Incubation, Illumination and Verification. These stages have held up remarkably well against the findings of modern neuroscience. Each of these stages has a distinct cognitive pattern and its own set of communication expectations.

  1. Preparation
    Preparation is gathering the information required for our later decision-making. We are actively foraging, looking for gaps in our current understanding of the situation and tracking down sources of that missing information. Our brains are actively involved in the task, but we also have a realistic expectation of the timeline required. This is the perfect match for email as a channel. We’ll come back to our expectations at this stage in a moment, as they’re key to understanding what a reasonable response time is.
  2. Incubation
    Once we have the information we require, our brain often moves the problem to the back burner. Even though it’s not “top of mind,” this doesn’t mean the brain isn’t still mulling it over. It’s the processing that happens while we’re sleeping or taking a walk. Because the brain isn’t actively working on the problem, there is no real communication needed.
  3. Illumination
    This is the eureka moment. You literally “make up your mind”: the cognitive stars align and you settle on a decision. You are now ready to take action. Again, at this stage, there is little to no outside communication needed.
  4. Verification
    Even though we’ve “made up our mind,” there is still one more step before action. We need to make sure our decision matches what is feasible in the real world. Does our internal reality match the external one? Again, our brains are actively involved, pushing us forward. Again, there is often some type of communication required here.

What we have here — in intelligence terms — is a sensemaking loop. The brain ideally wants this loop to continue smoothly, without interruption. But at two of the stages — the beginning and end — our brain needs to idle, waiting for input from the outside world.
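Purely as an illustration, the four-stage loop can be sketched as a tiny state machine. The stage names come from the text above; the transition logic and the set of "waiting" stages are my own simplification, not anything Wallas or Helmholtz specified:

```python
# Illustrative sketch of the Wallas four-stage sensemaking loop.
# Stage names come from the essay; the wrap-around transition is an assumption.
STAGES = ["Preparation", "Incubation", "Illumination", "Verification"]

# Per the essay, the brain idles waiting on the outside world at the
# beginning and end of the loop -- the stages where channels like email matter.
WAITS_ON_OUTSIDE_WORLD = {"Preparation", "Verification"}

def next_stage(stage: str) -> str:
    """Advance the loop; Verification wraps back around to Preparation."""
    i = STAGES.index(stage)
    return STAGES[(i + 1) % len(STAGES)]

# Walk one full cycle, starting at Preparation.
stage = "Preparation"
cycle = [stage]
for _ in range(len(STAGES) - 1):
    stage = next_stage(stage)
    cycle.append(stage)
```

The point the sketch makes is structural: two of the four states block on external input, so a dropped email stalls the whole loop, not just one step.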

Brains that have put tasks on idle do one of two things: They forget, or they get irritated. There are no other options.

The only variance is the degree of irritation. If the task is not that important to us, we get mildly irritated. The more important the task and the longer we are forced to put it on hold, the more frustrated we get.

Next, let’s talk about expectations. At the Preparation phase, we realize the entire world does not march to the beat of our internal drummer. Using email is our way to accommodate the collective schedules of the world. We are not demanding an immediate response. If we did, we’d use another channel, like a phone or instant messaging. When we use email, we expect those on the receiving end to fit our requirements into their priorities.

A recent survey by Jeff Toister, a customer service consultant, found that 87% of respondents expect a response to their emails within one day. Half of those expect a response in four hours or less. The most demanding are baby boomers — probably because email is still our preferred communication channel.

What we do not expect is for our emails to be completely ignored. Forever.

Yet, according to a recent benchmark study by SuperOffice, that is exactly what happens. 62.4% of businesses contacted with a customer service question in the study never responded. 90.5% never acknowledged receiving an email.  They effectively said to those customers, “Either forget us or get pissed off at us. We don’t really care.”

This lack of response is fine if you really don’t care. I toss a number of emails from my inbox daily without responding. They are a waste of my time. But if you have any expectation of having any type of relationship with the sender, take the time to hit the “reply” button.

There were some red flags that these non-responsive companies had in common. Typically, they could only be contacted through a web form on their site. I know I only fill these out if I have no other choice. If there is a direct email link, I always opt for that. These companies also tended to be smaller and didn’t use auto-responders to confirm a message had been received.

If this sounds like a rant, it is. One of my biggest frustrations is lack of email follow-up. I have found that the bar to surprise and delight me via your email response procedure is incredibly low:

  1. Respond.
  2. Don’t be a complete idiot.

Clear, Simple…and Wrong

For every complex problem there is an answer that is clear, simple, and wrong.
H. L. Mencken

We live in a world of complex problems. And – increasingly – we long for simple solutions to those problems. Brexit was a simple answer to a complex problem. Trump’s border wall is a simple answer to a complex problem. The current wave of populism is being driven by the desire for simple answers to complex problems.

But, like H.L. Mencken said…all those answers are wrong.

Even philosophers – who are a pretty complex breed – have embraced the principle of simplicity. William of Ockham, a 14th-century Franciscan friar who studied logic, wrote “Entia non sunt multiplicanda praeter necessitatem.” This translates as “More things should not be used than are necessary.” It has since been called “Occam’s Razor.” In scientific research, it’s known as the principle of parsimony.

But Occam’s Razor illustrates a shortcoming of humans. We will look for the simplest solution even if it isn’t the right solution. We forget the “are necessary” part of the principle. The Wikipedia entry for Occam’s Razor includes this caveat: “Occam’s razor only applies when the simple explanation and complex explanation both work equally well. If a more complex explanation does a better job than a simpler one, then you should use the complex explanation.”

This introduces a problem for humans. Simple answers are usually easier for us. People can grasp them more easily. Given a choice between complex and simple, we almost always default to the simple. For most of our history, this has not been a bad strategy. When all the factors that determine our likelihood of survival are proximate and intent on eating us, simple and fast is almost always the right bet.

But then we humans went and built a complex world. We started connecting things together into extended networks. We exponentially introduced dependencies. Through our ingenuity, we transformed our environments and, in the process, made complexity the rule rather than the exception. Unfortunately, our brains didn’t keep up. They still operate as if our biggest concerns were to find food and to avoid becoming food.

Our brains are causal inference machines. We assign cause and effect without bothering to determine if we are right.  We are hardwired to go for simple answers. When the world was a pretty simple place, the payoff for cognitively crunching complex questions wasn’t worth it. But that’s no longer the case. And when we mistake correlation for causation, the consequences can be tragic.

Let’s go back to the example of Trump’s Wall. I don’t question that illegal (or legal, for that matter) immigration causes pressures in a society. That’s perfectly natural, no matter where those immigrants are coming from. But it’s also a dynamic and complex problem. There are a myriad of interleaved and inter-dependent factors underlying the visible issue. If we don’t take the time to understand those dynamics of complexity, a simple solution – like a wall – could unleash forces that have drastic and unintended consequences. Even worse, thanks to the nature of complexity, those consequences can be amplified throughout a network.

Simple answers can also provide a false hope that keeps us from digging deeper for the true nature of the problem. They let us fall into the trap of “one and done” thinking. Why hurt our heads thinking about complex issues when we can put a checkmark beside an item on our to-do list and move on to the next one?

According to Ian McKenzie, this predilection for simplicity is also rotting away the creative core of advertising. In an essay he posted on Medium, he points to a backlash against Digital because of its complexity, “Digital is complex. And because the simplicity bias says complicated is bad, digital and data are bad by association. And this can cause smart people trained in traditional thinking to avoid or tamp down digital ideas and tactics because they appear to be at odds with the simplicity dogma.”

Like it or not, we ignore complexity at our peril. As David Krakauer, President of the Santa Fe Institute and William H. Miller Professor of Complex Systems warned, “There is only one Earth and we shall never improve it by acting as if life upon it were simple. Complex systems will not allow it.”

 

Don’t Be So Quick to Eliminate Friction

If you have the mind of an engineer, you hate friction. When you worship at the altar of optimization, friction is something to be ruthlessly eliminated – squeezed out of the equation. Friction equals inefficiency. It saps the energy out of our efforts.  It’s what stands between reality and a perfect market, where commerce theoretically slides effortlessly between participants. Much of what we call tech today is optimized with the goal of eliminating friction.

But there’s another side of friction. And perhaps we shouldn’t be too quick to eliminate it.  Without friction, there would be no traction, so you wouldn’t be able to walk. Your car would have no brakes. Nails, bolts, screws, glue and tape wouldn’t work. Without friction, there would be nothing to keep the world together.

And in society, it’s friction that slows us down and helps us smell the roses. That’s because another word for friction – when we talk about our experiential selves – is savouring.

Take conversations, for instance. A completely efficient, friction free conversation would be pretty damn boring. It would get the required information from participant A to participant B – and vice versa – in the minimum number of words. There would be no embellishment, no nuance, no humanity. It would not be a conversation we would savour.

Savouring is all about slowing down. According to Maggie Pitts, a professor at the University of Arizona who studies how we savour conversations, “Savouring is prolonging, extending, and lingering in a positive or pleasant feeling.” And you can’t prolong anything without friction.

But what about friction in tech itself? As I said before, the rule of thumb in tech is to eliminate as much friction as possible. But can the elimination of friction go too far? Product designer Jesse Weaver says yes. In an online essay, he says we friction-obsessed humans should pay more attention to the natural world, where friction is still very much alive and well, thank you:

“Nature is the ultimate optimizer, having run an endless slate of A/B tests over billions of years at scale. And in nature, friction and inconvenience have stood the test of time. Not only do they remain in abundance, but they’ve proven themselves critical. Nature understands the power of friction while we have become blind to it.”

A couple of weeks ago, I wrote about the Yerkes-Dodson Law, which states that there can be too much of a good thing – or, in this case, too little of a supposedly bad thing. According to a 2012 study, when it comes to assigning value, we actually appreciate a little friction. It’s known as the IKEA effect. There is a sweet spot for optimal effort. Too much and we get frustrated. Too little and we feel that it was too easy. When it’s just right, we have a crappy set of shelves that we love more than we should because we had to figure out how to put them together.

Weaver feels the same is true for tech. As examples, he points to Amazon’s Dash smart button and Facebook’s Frictionless Sharing. In the first case, Amazon claims the need has been eliminated by voice-activated shopping on Alexa. In the second case, we had legitimate privacy concerns. But Weaver speculates that perhaps both things just moved a little too fast for our comfort, removing our sense of control. We need a little bit of friction in the system so we feel we can apply the brakes when required.

If we eliminate too much friction, we’ll slip over that hump into not valuing the tech-enabled experiences we’re having. He cites the 2018 World Happiness Report, which has been tracking our satisfaction with life on a global basis for over a decade. In that time, despite our tech capabilities increasing exponentially, our happiness has flatlined.

I have issues with his statistical logic – there is a bushel basket full of confounding factors in the comparison he’s trying to make – but I generally agree with Weaver’s hypothesis. We do need some friction in our lives. It applies the brakes to our instincts. It forces us to appreciate the here and now that we’re rushing through. It opens the door to serendipity and makes allowances for savouring.

In the end, we may need a little friction in our lives to appreciate what it means to be human.

 

Less Tech = Fewer Regrets

In a tech ubiquitous world, I fear our reality is becoming more “tech” and less “world.”  But how do you fight that? Well, if you’re Kendall Marianacci – a recent college grad – you ditch your phone and move to Nepal. In that process she learned that, “paying attention to the life in front of you opens a new world.”

In a recent post, she reflected on lessons learned by truly getting off the grid:

“Not having any distractions of a phone and being immersed in this different world, I had to pay more attention to my surroundings. I took walks every day just to explore. I went out of my way to meet new people and ask them questions about their lives. When this became the norm, I realized I was living for one of the first times of my life. I was not in my own head distracted by where I was going and what I needed to do. I was just being. I was present and welcoming to the moment. I was compassionate and throwing myself into life with whoever was around me.”

It’s sad and a little shocking that we have to go to such extremes to realize how much of our world can be obscured by a little 5-inch screen. Where did tech that was supposed to make our lives better go off the rails? And was the derailment intentional?

“Absolutely,” says Jesse Weaver, a product designer. In a post on Medium.com, he lays out – in alarming terms – our tech dependency and the trade-off we’re agreeing to:

“The digital world, as we’ve designed it, is draining us. The products and services we use are like needy friends: desperate and demanding. Yet we can’t step away. We’re in a codependent relationship. Our products never seem to have enough, and we’re always willing to give a little more. They need our data, files, photos, posts, friends, cars, and houses. They need every second of our attention.

We’re willing to give these things to our digital products because the products themselves are so useful. Product designers are experts at delivering utility.”

But are they? Yes, there is utility here, but it’s wrapped in a thick layer of addiction. What product designers are really good at is fostering addiction by dangling a carrot of utility. And, as Weaver points out, we often mistake utility for empowerment,

“Empowerment means becoming more confident, especially in controlling our own lives and asserting our rights. That is not technology’s current paradigm. Quite often, our interactions with these useful products leave us feeling depressed, diminished, and frustrated.”

That’s not just Weaver’s opinion. A new study from HumaneTech.com backs it up with empirical evidence. They partnered with Moment, a screen time tracking app, “to ask how much screen time in apps left people feeling happy, and how much time left them in regret.”

According to 200,000 iPhone users, here are the apps that make people happiest:

  1. Calm
  2. Google Calendar
  3. Headspace
  4. Insight Timer
  5. The Weather
  6. MyFitnessPal
  7. Audible
  8. Waze
  9. Amazon Music
  10. Podcasts

That’s three meditative apps, three utilitarian apps, one fitness app, one entertainment app and two apps that help you broaden your intellectual horizons. If you are talking human empowerment – according to Weaver’s definition – you could do a lot worse than this roundup.

But here were the apps that left their users with a feeling of regret:

  1. Grindr
  2. Candy Crush Saga
  3. Facebook
  4. WeChat
  5. Candy Crush
  6. Reddit
  7. Tweetbot
  8. Weibo
  9. Tinder
  10. Subway Surf

What is even more interesting is what the average time spent is for these apps. For the first group, the average daily usage was 9 minutes. For the regret group, the average daily time spent was 57 minutes! We feel better about apps that do their job, add something to our lives and then let us get on with living that life. What we hate are time sucks that may offer a kernel of functionality wrapped in an interface that ensnares us like a digital spider web.

This study comes from the Center for Humane Technology, headed by ex-Googler Tristan Harris. The goal of the Center is to encourage designers and developers to create apps that move “away from technology that extracts attention and erodes society, towards technology that protects our minds and replenishes society.”

That all sounds great, but what does it really mean for you and me and everybody else who hasn’t moved to Nepal? It all depends on what revenue model is driving the development of these apps and platforms. If it is anything that depends on advertising – in any form – don’t count on any nobly intentioned shifts in design direction anytime soon. More likely, it will mean some half-hearted placations like Apple’s new Screen Time warning that pops up on your phone every Sunday, giving you the illusion of control over your behaviour.

Why an illusion? Because things like Apple’s Screen Time are great for our prefrontal cortex, the intent-driven part of our rational brain that puts our best intentions forward. They’re not so good for our lizard brain, which subconsciously drives us to play Candy Crush and swipe our way through Tinder. And when it comes to addiction, the lizard brain has been on a winning streak for most of the history of mankind. I don’t like our odds.

The developers’ escape hatch is always the same – they’re giving us control. It’s our own choice, and freedom of choice is always a good thing. But there is an unstated deception here. It’s the same lie that Mark Zuckerberg told last Wednesday when he laid out the privacy-focused future of Facebook. He’s putting us in control. But he’s not. What he’s doing is making us feel better about spending more time on Facebook. And that’s exactly the problem. The less we worry about the time we spend on Facebook, the less we will think about it at all. The less we think about it, the more time we will spend. And the more time we spend, the more we will regret it afterwards.

If that doesn’t seem like an addictive cycle, I’m not sure what does.

 

It’s the Fall that’s Gonna Kill You

Butch: I’ll jump first.
Sundance: Nope.
Butch: Then you jump first.
Sundance: No, I said!
Butch: What’s the matter with you?!
Sundance: I can’t swim!
Butch:  Why, you crazy — the fall’ll probably kill ya!

                                     Butch Cassidy and the Sundance Kid – 1969

Last Monday, fellow Insider Steven Rosenbaum asked, “Is Advertising Obsolete?” The column and the post by law professor Ramsi Woodcock that prompted it were both interesting. So were the comments – which were by and large supportive of good advertising.

I won’t rehash Rosenbaum’s column, but it strikes me that we – being the collective we of the MediaPost universe – have been debating whether advertising is good or bad, relevant or obsolete, a trusted source of information or a con job for the ages, and we don’t seem to be any closer to an answer.

The reason is that an advertisement is all of those things. But not at the same time.

I used to do behavioral research, specifically eye-tracking. At the end of an eye-tracking study, you get what’s called an aggregate heat map. This is the summary of all the eyeball activity of all the participants over the entire duration of all interactions with whatever the image was. These were interesting, but personally I was fascinated by the time slices of the interactions. I found that often you can learn more about behaviors by looking at who looked at what, when. It was only when we looked at interactions on a second-by-second basis that we started to notice the really interesting patterns emerge. For example, when looking at a new website, men looked immediately at the navigation bar, whereas women were first drawn to the “hero” image. But if you looked at the aggregates – the sum of all scanning activities – the men’s and women’s heat maps were almost identical.

I believe the same thing is happening when we try to pin down advertising. And it’s because advertising – and our attitudes towards it – change through the life cycle of a brand, or product, or company.

Our relationship with a product or brand can be represented by an inverted-U chart, with the vertical axis being awareness/engagement and the horizontal axis being time. Like a zillion other things, our brain defines our relationship with a product or brand by a resource/reward algorithm. Much of human behavior can be attributed to a dynamic tension between opposing forces, and this is no exception. Driving us to explore the new are cognitive forces like novelty seeking and changing expectations of utility, while things like cognitive lock-in and the endowment effect tend to keep us loyal. As we engage with a new product or brand, we climb up the first side of the inverted U. But nothing in nature continues in a straight line, much as every sales manager would love it to. At some point, our engagement will peak and we’ll get itchy feet to try something new. Then we start falling down the descent of the U. And it’s this fall that kills our acceptance of advertising.

This inverted U shows up all the time in human behavior. We assume you can never have too much of a good thing, but this is almost never true. There’s even a law that defines this, known as the Yerkes-Dodson Law. Developed by psychologists Robert Yerkes and John Dodson in 1908, it plots performance against mental or physical arousal. Predictably, performance increases with how fully we’re engaged with whatever we’re doing – but only up to a point. Then performance peaks and starts to decline into anxiety.
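The shape of that curve is easy to see in a toy model. The quadratic form and the numbers below are illustrative assumptions of mine, not anything fitted from Yerkes and Dodson’s 1908 data:

```python
# Illustrative sketch of the Yerkes-Dodson inverted U.
# The quadratic form, the optimal point and the scale are assumptions
# chosen only to show the shape: rise, peak, decline.

def performance(arousal: float, optimal: float = 0.5) -> float:
    """Performance peaks at the optimal arousal level and falls off on either side."""
    return max(0.0, 1.0 - 4.0 * (arousal - optimal) ** 2)

# Sample the curve from low arousal (bored) to high arousal (anxious).
levels = [0.0, 0.25, 0.5, 0.75, 1.0]
scores = [performance(a) for a in levels]
```

Walking the samples from left to right gives the column’s point in miniature: engagement helps only until the peak, after which more pressure makes performance worse, not better.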

It’s also why TV show runners are getting smarter about ending a series just as they crest the top of the hump. Hard lessons about the dangers of the decline have been learned by the jumping of multiple sharks.

Our entire relationship with a brand or product is built on the foundation of this inverted U, so it should come as no surprise that our acceptance of advertising for said brand or product also has to be plotted on this same chart. Yet it seems to constantly come as a surprise to marketing teams. In the beginning, on the upslope of the upside-down U, we are seeking novelty, and an advertisement for something new fits the bill.

When the inevitable downward curve starts, the sales and marketing teams panic and respond by upping advertising. They do their best to maintain a straight up line, but it’s too late. The gap between their goals and audience acceptance continues to grow as one line is projected upwards and the other curves ever more steeply downwards. Eventually the message is received and the plug is pulled, but the damage has already been done.

When we look at advertising, we have to plot it against this ubiquitous U. And when we talk about advertising, we have to be more careful to define what we’re talking about. If we’re talking specifically, we will all be able to find examples of useful and even welcome ads. But when I talk about the broken contract of advertising, I speak in more general terms. In the digital compression of timelines, we are reaching the peak of advertising effectiveness faster than ever before. And when we hit the decline, we actively reject advertising because we can. We have other alternatives. This decline is dragging the industry down with it. Yes, we can all think of good ads, but the category is suffering from our evolving opinion which is increasingly being formed on the downside of the U.

 

Reality Vs Meta-Reality

“I know what I like, and I like what I know;”
Genesis

I watched the Grammys on Sunday night. And as it turned out, I didn’t know what I liked. And I thought I liked what I knew. But by the time I wrote this column (on Monday after the Grammys) I had changed my mind.

And it was all because of the increasing gap between what is real, and what is meta-real.

Real is what we perceive with our senses at the time it happens. Meta-real is how we reshape reality after the fact and then preserve it for future reference. And thanks to social media, the meta-real is a booming business.

Nobel laureate Daniel Kahneman first explored this with his work on the experiencing self and the remembering self. In a stripped-down example, imagine two scenarios. Scenario 1 has your hand immersed for 60 seconds in ice-cold water that causes a moderate amount of pain. Scenario 2 has your hand immersed for 90 seconds. For the first 60 seconds, you’re immersed in water at the same temperature as Scenario 1, but then you leave your hand immersed for an additional 30 seconds while the water is slowly warmed by 1 degree.

After going through both scenarios and being told you have to repeat one of them, which would you choose? Logically speaking, you should choose Scenario 1. While uncomfortable, you have the benefit of avoiding an extra 30 seconds of a slightly less painful experience. But for those who went through it, that’s not what happened. Eighty percent of those who noticed that the water got a bit warmer chose to redo Scenario 2.

It turns out that we have two mental biases that kick in when we remember something we experienced:

  1. Duration doesn’t count
  2. Only the peak (best or worst moment) and the end of the experience are registered.

This applies to a lot more than just cold-water experiments. It also holds true for vacations, medical procedures, movies and even the Grammys. Not only that, there is an additional layer of meta-analysis that shifts us even further from the reality we actually experienced.
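A minimal sketch of how those two biases combine (what Kahneman calls the peak-end rule) helps show why Scenario 2 wins the rematch. The per-second pain levels below are made-up numbers for illustration:

```python
# Peak-end rule: remembered intensity is roughly the average of the worst
# moment and the final moment -- duration is ignored entirely.

def remembered_pain(samples):
    """Peak-end approximation: mean of the worst and the last sample."""
    return (max(samples) + samples[-1]) / 2

scenario_1 = [7] * 60             # 60 s of cold water at pain level 7
scenario_2 = [7] * 60 + [6] * 30  # same 60 s, plus 30 s of slightly warmer water

print(remembered_pain(scenario_1))  # 7.0
print(remembered_pain(scenario_2))  # 6.5 -> remembered as the lesser ordeal
```

The longer scenario is remembered as less unpleasant, even though it contains strictly more total pain, because the extra 30 seconds improves the ending.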

After I watched the Grammys, I had my own opinion of which performances I liked and those I didn’t care for. But that opinion was a work in progress. On Monday morning, I searched for “Best moments of Grammys 2019.” Rather quickly, my opinion changed to conform with what I was reading. And those summaries were in turn based on an aggregate of opinions gleaned from social media. It was Wisdom of Crowds – applied retroactively.

The fact is that we don’t trust our own opinions. This is hardwired in us. Conformity is something the majority of us look for. We don’t want to be the only one in the room with a differing opinion. Social psychologist Solomon Asch proved this almost 70 years ago. The difference is that in the Asch experiment, conformity happened in the moment. Now, thanks to our digital environment where opinions on anything can be found at any time, conformity happens after the fact. We “sandbox” our own opinions, waiting until we can see if they match the social media consensus. For almost any event you can name, there is now a market for opinion aggregation and analysis. We take this “meta” data and reshape our own reality to match.

It’s not just the malleability of our reality that is at stake here. Our memories serve as guides for the future. They color the actions we take and the people we become. We evolved as conformists because that was a much surer bet for our survival than relying on our own experiences alone.  But might this be a case of a good thing taken too far? Are we losing too much confidence in the validity of our own thoughts and opinions?

I'm pretty sure it doesn't matter what Gord Hotchkiss thinks about the Grammys of 2019. But I fear there's much more at stake here.

Marketing Vs. Advertising: Making It Personal

Last year I wrote a lot about the erosion of the advertising bargain between advertisers and their audience. Without rehashing it at length, let me summarize by simply stating that we are no longer as accepting of advertising, because we now have a choice. One of those columns sparked a podcast on Beancast (the relevant discussion started off the podcast).

As the four panelists – all marketing/advertising professionals – debated the topic, they got mired in the question of what is advertising and what is marketing. They're not alone. It confuses me too.

I've spent all my life in marketing, but this was a tough column to write. I really had to think about the essential differences between advertising and marketing – casting aside the textbook definitions and getting to something that resonated at an intuitive level. I ran into the same conundrum as the panelists. The disruption washing over our industry is also washing away the traditional line drawn between the two. So I did what I usually do when I find something intellectually ambiguous: I tried to simplify it down to the most basic analogy I could think of. When it comes to me – as a person – what would be the equivalent of marketing, what would be advertising, and – just to muddy the waters a little more – what would be branding? If we can reduce this to something we can gut-check, maybe the answers will come more easily.

Let's start with branding. Your brand is what people think of you as a person. Are you a gentleman or an asshole? Smart, funny, pedantic, prickly, stunningly stupid? Fat and lazy, or lean and athletic? Notice that I said your brand is what other people think of you, not what you think of yourself. How you conduct yourself as a person will influence the opinions of others, but ultimately your brand is arbitrated one person at a time, and you are not that person. Branding involves both parties, but not necessarily at the same time. It can be asynchronous. You live your life, and by doing so you create ripples in the world. People develop opinions of you.

To me, although it involves other people, marketing is somewhat faceless and less intimate. In a way, it's more unilateral than advertising. Again, to take it back to our personal analogy, marketing is simply the social you – the public extension of who you are. One might say that your personal approach to marketing is you saying, "This is me, take it or leave it!"

But advertising is different. It focuses on a specific recipient. It implies a bilateral agreement. Again, analogously speaking, it’s like asking another person for a favor. There is an implicit or explicit exchange of value. It involves an overt attempt to influence.

Let’s further refine this into a single example. You’re invited to a party at a friend’s house. When you walk in the door, everyone glances over to see who’s arrived. When they recognize you, each person immediately has their own idea of who you are and how they feel about you. That is your brand. It has already been formed by your marketing, how you have interacted with others your entire life. At that moment of recognition, your own brand is beyond your control.

But now you have to mingle. You scan the room and see someone you know who is already talking to someone else. You walk over, hoping to work your way into their conversation. That, right there, is advertising. You're asking for their attention. They have to decide whether or not to give it to you. How they decide will depend on how they feel about you, but it will also depend on what else they're doing – i.e., how interesting the conversation they're already engaged in is. Another variable is their expectation of what a conversation with you might hold – the anticipated utility of said conversation. Are you going to tell them news of great interest to them, ask for a favor, or just bore them to tears? So the success of the advertising exchange, in the eyes of the recipient, can be defined by three variables: emotional investment in the advertiser (brand love), openness to interruption, and expected utility if interrupted.
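The three-variable exchange described above can be sketched as a toy scoring function. The unweighted sum and the 1.5 threshold are invented purely for illustration; this is not a validated model of attention:

```python
# Toy version of the party analogy: will the "recipient" give the
# "advertiser" their attention? All numbers are illustrative assumptions.

def gets_attention(brand_love, openness, expected_utility, threshold=1.5):
    """Return True if the combined pull clears the attention threshold.

    brand_love       -- emotional investment in the advertiser (0 to 1)
    openness         -- openness to interruption right now (0 to 1)
    expected_utility -- anticipated value of the message (0 to 1)
    """
    return brand_love + openness + expected_utility >= threshold

# A loved brand catching us in a receptive moment gets through...
print(gets_attention(brand_love=0.9, openness=0.6, expected_utility=0.7))  # True
# ...while a so-so brand interrupting a busy moment does not.
print(gets_attention(brand_love=0.3, openness=0.2, expected_utility=0.4))  # False
```

The point of the sketch is simply that all three variables trade off against one another: a deficit in one can be made up by the others, but a deficit in all three means the ad is ignored.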

If this analogy approximates the essential nature of advertising, why do I feel advertising is doomed? I don't think it has anything to do with branding. I've gone full circle on this, but right now I believe brands are more important than ever. No, the death of advertising will be attributable to the other two variables: do we want to be interrupted and, if the answer is yes, what do we expect to gain by allowing the interruption?

First of all, let's look at our openness to interruption. It may sound counterintuitive, but our obsession with multitasking actually makes us less open to interruption.

Think of how we're normally exposed to advertising content. It's typically on a screen of some type. We may be switching back and forth between multiple screens. And it probably arrives right when we're juggling a full load of enticing cognitive invitations: checking our social media feeds, deciding which video to watch, tracking down a website we want, trying to load an article that interests us. The expected utility of all these things is high. We have "Fear of Missing Out" – big time! This is exactly when advertising interrupts us, asking us to pay attention to its message.

"Paying attention" is exactly the right phrase to use. Attention is a finite resource that can be exhausted – and that's exactly what multitasking does. It exhausts our cognitive resources. The brain – in defence – becomes more miserly with those resources: the threshold that must be met before the brain allocates attention goes up. The brain does this not simply by ignoring anything that falls below the attention-worthy threshold, but by mildly triggering a negative reaction, causing a feeling of irritation with whatever is begging for our attention. This is a hardwired response meant to condition us for the future. The brain assumes that if we don't want to be interrupted once, the same rule will hold in the future, and making us irritated is a way to accomplish this. The reaction sets up a reinforcing cycle that builds up an increasingly antagonistic attitude towards advertising.
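That reinforcing cycle can be simulated as a toy loop: every interruption that falls below the attention threshold breeds irritation, which raises the bar for the next one. All of the numbers here are illustrative assumptions, not measured values:

```python
# Toy simulation of a rising attention threshold under repeated interruption.

def escalate(threshold, appeals, bump=0.1):
    """Run a stream of ad 'appeal' scores through a rising attention threshold.

    Returns the final threshold and, for each ad, whether it was attended to.
    """
    attended = []
    for appeal in appeals:
        if appeal < threshold:
            threshold += bump  # rejection makes the brain more miserly
            attended.append(False)
        else:
            attended.append(True)
    return threshold, attended

final, results = escalate(1.0, [0.9, 0.8, 0.9, 1.1])
# The last ad (appeal 1.1) would have cleared the starting threshold of 1.0,
# but by the time it arrives the bar has risen past it.
print(round(final, 1), results)  # 1.4 [False, False, False, False]
```

The cycle is self-reinforcing: each ignored ad makes the next one less likely to get through, regardless of its own merits.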

Secondly, what is the expected utility of paying attention to advertising? This goes hand in hand with the previous thought: advertising was always a kind of toll gate we had to pass through to access content, but now we have choices. The expected utility of the advertising-supported content has been largely removed from the equation, leaving us with just the expected utility of the advertisement itself. The brain is constantly running an algorithm that balances resource allocation against reward, and in our new environment the resource-allocation threshold keeps getting higher as the reward keeps getting lower.

Is Google Politically Biased?

As a company, the answer is almost assuredly yes.

But are the search results biased? That’s a much more nuanced question.

Sundar Pichai testifying before Congress

In trying to answer that question last week, Google CEO Sundar Pichai attempted to explain how Google's algorithm works to Congress's House Judiciary Committee (which is kind of like God explaining how the universe works to my sock, but I digress). One of the catalysts for this latest appearance by a tech executive was another one of President Trump's ranting tweets, which intimated that something was rotten in the Valley of the Silicon:

"Google search results for 'Trump News' shows only the viewing/reporting of Fake News Media. In other words, they have it RIGGED, for me & others, so that almost all stories & news is BAD. Fake CNN is prominent. Republican/Conservative & Fair Media is shut out. Illegal? 96% of … results on 'Trump News' are from National Left-Wing Media, very dangerous. Google & others are suppressing voices of Conservatives and hiding information and news that is good. They are controlling what we can & cannot see. This is a very serious situation-will be addressed!"

Granted, this tweet is non-factual, devoid of any type of evidence and verging on frothing at the mouth. Take, as just one example, the 96% number Trump quotes. It came from a very unscientific straw poll done by one reporter on a far-right-leaning site called PJ Media. In effect, Trump did exactly what he accuses Google of doing: he cherry-picked his source and called it a fact.

But what Trump has inadvertently put his finger on is the uneasy balance that Google tries to maintain as both a search engine and a publisher. And that’s where the question becomes cloudy. It’s a moral precipice that may be clear in the minds of Google engineers and executives, but it’s far from that in ours.

Google has gone on the record as ensuring their algorithm is apolitical. But based on a recent interview with Google News head Richard Gingras, there is some wiggle room in that assertion. Gingras stated,

“With Google Search, Google News, our platform is the open web itself. We’re not arbiters of truth. We’re not trying to determine what’s good information and what’s not. When I look at Google Search, for instance, our objective – people come to us for answers, and we’re very good at giving them answers. But with many questions, particularly in the area of news and public policy, there is not one single answer. So we see our role as [to] give our users, citizens, the tools and information they need – in an assiduously apolitical fashion – to develop their own critical thinking and hopefully form a more informed opinion.”

But –  in the same interview – he says,

“What we will always do is bias the efforts as best we can toward authoritative content – particularly in the context of breaking news events, because major crises do tend to attract the bad actors.”

So Google does boost news sites it feels are reputable, and it's these sites – like CNN – that typically dominate the results. Do reputable news sources tend to lean left? Probably. But that isn't Google's fault. That's the nature of the open web. If you use it as your platform, you build in its inherent biases. And the minute you filter further on top of that platform, you leave yourself open to accusations of editorializing.

There is another piece to this puzzle. Searches on Google are, in fact, biased – but that bias is entirely intentional, and it's yours. Search results are personalized so they're more relevant to you. Your location, your past search history, the way you structure your query and a number of other signals are used by Google to filter the results you're shown. There is no liberal conspiracy. It's just the way the search algorithm works. In this way, Google is prone to the same type of filter-bubble problem that Facebook has. In another interview, Tim Hwang, director of the Harvard-MIT Ethics and Governance of AI Initiative, touched on this:

“I was struck by the idea that whereas those arguments seem to work as late as only just a few years ago, they’re increasingly ringing hollow, not just on the side of the conservatives, but also on the liberal side of things as well. And so what I think we’re seeing here is really this view becoming mainstream that these platforms are in fact not neutral, and that they are not providing some objective truth.”

The biggest challenge here lies not in the reality of what Google is or how it works, but in what our perception of Google is. We will never know the inner workings of the Google algorithm, but we do trust in what Google shows us. A lot. In our own research some years ago, we saw a significant lift in consumer trust when brands showed up on top of search results. And this effect was replicated in a recent study that looked at Google’s impact on political beliefs. This study found that voter preferences can shift by as much as 20% due to biased search rankings – and that effect can be even higher in some demographic groups.

If you are the number one channel for information, if you manipulate the ranking of that information in any way, and if you wield the power to change a significant percentage of minds based on that ranking – guess what? You are the arbiter of truth. Like it or not.

The Psychology Behind My Netflix Watchlist

I live in Canada – which means I’m going into hibernation for the next 5 months. People tell me I should take up a winter activity. I tell them I have one. Bitching. About winter – specifically. You have your hobbies – and I have mine.

The other thing I do in the winter is watch movies. And being a with-it, tech-savvy guy, I have cut the cord and get my movie fix through not one but three streaming services: Netflix, Amazon Prime and Crave (a Canadian service). I've discovered that the psychology of Netflix is fascinating. It's the Paradox of Choice playing out in streaming time. It's the difference between what we say we do and what we actually do.

For example, I do have a watch list. It has somewhere around a hundred items on it. I'll probably end up watching about 20% of them. The rest will eventually go gentle into that good Netflix night. And according to a recent post on Digg, I'm actually doing quite well: by the admittedly small sample chronicled there, the average completion rate is somewhere between 5% and 15%.

When it comes to compiling viewing choices, I'm an optimizer. And I'm being kind to myself; others, less kind, refer to it as obsessive behavior. This refers to the satisficing/optimizing spectrum of decision making. I put an irrational amount of energy into rationalizing my viewing options. The more effort you put into decision making, the closer you are to the optimizing end of the spectrum. If you make choices quickly and with your gut, you're a satisficer.

What is interesting about Netflix is that it defers the Paradox of Choice. I dealt with this in a previous column. But I admit I'm having second thoughts. Netflix's watch list provides us with a sort of choosing purgatory: a middle ground where we can save titles according to the type of watcher we think we are. It's here that the psychology gets interesting. But before we go there, let's explore some basic psychological principles that underpin this Netflix paradox of choice.

Of Marshmallows and Will Power

In the 1960s, Walter Mischel and his colleagues conducted the now-famous Marshmallow Test, a longitudinal study that spanned several years. The finding (which is currently in some doubt) was that children who, when they were quite young, had the willpower to resist immediately taking a treat (the marshmallow) put in front of them in return for the promise of a greater treat (two marshmallows) 15 minutes later would go on to do substantially better in many aspects of their lives: education, careers, social connections and health. Without getting into the controversial aspects of the test, let's just focus on the role of willpower in decision making.

Mischel talks about a hot and cool system of making decisions that involve self-gratification. The “hot” is our emotions and the “cool” is our logic. We all have different set-points in the balance between hot and cool, but where these set points are in each of us depends on will power. The more willpower we have, the more likely it is that we’ll delay an immediate reward in return for a greater reward sometime in the future.

Our ability to rationalize and expend cognitive resources on a decision is directly tied to our willpower. And researchers have learned that willpower is a finite resource. The more we use it in a day, the less we have in reserve. Psychologists call this "ego depletion." And a loss of willpower leads to decision fatigue. The more tired we become, the less our brain is willing to work on the decisions we make. In one particularly interesting example, parole boards are much more likely to let prisoners go either first thing in the morning or right after lunch than they are as the day wears on. The decision to grant a prisoner his or her freedom involves risk. It requires more thought. Keeping them in prison is a default decision that – cognitively speaking – is a much easier choice.

Netflix and Me: Take Two

Let me now try to rope all this in and apply it to my Netflix viewing choices. When I add something to my watch list, I am making a risk-free decision. I am not committing to watch the movie now. Cognitively, it costs me nothing to hit the little plus icon. Because it’s risk free, I tend to be somewhat aspirational in my entertainment foraging. I add foreign films, documentaries, old classics, independent films and – just to leaven out my selection – the latest audience-friendly blockbusters. When it comes to my watch list additions, I’m pretty eclectic.

Eventually, however, I will come back to this watch list and will actually have to commit 2 hours to watching something. And my choices are very much affected by decision fatigue. When it comes to instant gratification, a blockbuster is an easy choice. It will have lots of action, recognizable and likeable stars, a non-mentally-taxing script – let’s call it the cinematic equivalent of a marshmallow that I can eat right away. All my other watch list choices will probably be more gratifying in the long run, but more mentally taxing in the short term. Am I really in the mood for a European art-house flick? The answer probably depends on my current “ego-depletion” level.

This entire mental framework presents its own paradox of choice every time I browse through my watchlist. I know I have previously said the Paradox of Choice isn't a thing when it comes to Netflix. But I may have changed my mind. I think it depends on what resources we're allocating. In Barry Schwartz's book The Paradox of Choice, he cites Sheena Iyengar's famous jam experiment. In that instance, the resource was the cost of jam. But if we're talking about two hours of my time – at the end of a long day – I have to confess that I struggle with choice, even when it has already been short-listed to a pre-selected set of potential entertainment choices. I find myself defaulting to what seems like a safe choice – a well-known Hollywood movie – only to be disappointed when the credits roll. When I do have the willpower to forego the obvious and take a chance on one of my more obscure picks, I'm usually grateful I did.

And yes, I did write an entire column on picking a movie to watch on Netflix. Like I said, it’s winter and I had a lot of time to kill.