Why We’re Not Ready for AI to Take the Wheel…Yet

It’s interesting to see how we humans assign trust.

Consider the following scenario. At any time, in any city in the world, you will put your life in the hands of a complete stranger in an environment you have no control over without a second thought. We do it every time we hail a cab. We know nothing about the driver or their safety record. We don’t know if they’re a good person or a psychopath. We place trust without any empirical reason to do so.

Yet a number of recent surveys indicate the majority of us don’t trust self-driving cars. A recent survey by AAA found that 71% of us would be afraid to ride in a fully self-driving vehicle. I’m one of them. I’m not sure I could slam the door on a self-driven Uber and relax in the back seat while AI takes the wheel. Yet I pride myself on being a fairly rational person, and there are plenty of rational reasons why self-driving cars should be far safer than the human-driven equivalents. Even the most skeptical measured comparisons call it a toss-up.

And that brings us to a key point: we don’t assign trust rationally. We do it emotionally. And emotionally, we have a tortured relationship with technology.

The problem here is two-fold. First, our trust mechanisms are built to work best when we’re face-to-face with the potential recipient of trust. Trust evolved to be a human-dependent process. And that brings us to the second problem. Over the last thousand years or so, we have learned how to trust in institutions. But that type of trust is dissolving rapidly.

Author and academic Rachel Botsman has spent over a decade looking at how technology is transforming trust. In an interview with Fast Company, she unpacks this notion of imploding institutional trust: “Whether it’s banks, the media, government, churches . . . this institutional trust that is really important to society is disintegrating at an alarming rate. And so how do we trust people enough to get in a car with a total stranger and yet we don’t trust a banking executive?”

I think this transformation of trust has something to do with the decoupling phenomenon I wrote about last week. When we relied on vertically integrated supply chains, we had no choice but to trust the institutions that were the caretakers of those chains. But now that our markets have flipped from the vertical to the horizontal, we are redefining our notions of trust. We are digitally connecting with strangers through sharing economy platforms like AirBnB and Uber and, in the process, we are finding new signals to indicate when we should trust and when we shouldn’t.

There is another unique aspect to our decision to trust. We tend to trust when it’s expedient to do so. Like so many things in human behavior, trust is just one factor wrapped up in our ongoing risk vs reward calculations. Our emotions will push us to trust when it’s required to get what we want. The fewer the alternatives available to us, the more we tend to trust.

Our lack of trust in self-driving vehicles is a more visceral example. I don’t think anyone believes the creators of self-driving technology are out to off our species in a self-driven version of a Mad Max conspiracy. We just aren’t wired to trust machines with our lives. There is an innate human hubris that believes that when it comes to self-preservation, our fates are best left in our own hands.

Self-driving proponents believe that with time and exposure, these trust issues will be resolved. The trick to us trusting machines with our lives is to lull us into not thinking about it too much. Millions of us do it every day when we board an airplane. The degree to which our airborne lives are dependent on technology was tragically revealed with the recent Boeing Max incidents. The fact is, if we had any idea how much our living to see tomorrow is dependent on technology, we would dissolve into a shuddering, panic-stricken mess. In this case, ignorance is indeed bliss.

But there are few times when we have to make the same conscious decision to put our lives in the metaphorical hands of a computer to the extent we do in a self-driven car. If we look at how we decide to trust, this is an environment strewn with psychological landmines. Remember, we tend to trust when we have no options. And in this case, our option couldn’t be clearer. The steering wheel is right there, begging us to take over. It freaks us out when the car pulls away from the curb and we see the wheel start turning by itself. It’s small wonder that 71% of us are having some control issues.

 

Why Are So Many Companies So Horrible At Responding To Emails?

I love email. I hate 62.4% of the people I email.

Sorry. That’s not quite right. I hate 62.4% of the people I email in the futile expectation of a response…sometime…in the next decade or so (I will get back to the specificity of the 62.4% shortly). It’s you who suck.

You know who you are. You are the ones who never respond to emails, who force me to send email after email with an escalating tone of prickliness, imploring you to take a few seconds from whatever herculean tasks fill your day to actually acknowledge my existence.

It’s you who force me to continually set aside whatever I’m working on to prod you into doing your damned job! And — often — it is you who causes me to eventually abandon email in exasperation and then sink further into the 7th circle of customer service hell: voicemail.

Why am I (and trust me, I’m not alone) so exasperated with you? Allow me to explain.

From our side, when we send an email, we are making a psychological statement about how we expect this communication channel to proceed. We have picked this channel deliberately. It is the right match for the mental prioritization we have given this task.

In 1891, in a speech on his 70th birthday, German scientist Hermann von Helmholtz explained how ideas came to him. He identified four stages that were later labeled by social psychologist Graham Wallas: Preparation, Incubation, Illumination and Verification. These stages have held up remarkably well against the findings of modern neuroscience. Each of these stages has a distinct cognitive pattern and its own set of communication expectations.

  1. Preparation
    Preparation is gathering the information required for our later decision-making. We are actively foraging, looking for gaps in our current understanding of the situation and tracking down sources of that missing information. Our brains are actively involved in the task, but we also have a realistic expectation of the timeline required. This is the perfect match for email as a channel. We’ll come back to our expectations at this stage in a moment, as it’s key to understanding what a reasonable response time is.
  2. Incubation
    Once we have the information we require, our brain often moves the problem to the back burner. Even though it’s not “top of mind,” this doesn’t mean the brain isn’t still mulling it over. It’s the processing that happens while we’re sleeping or taking a walk. Because the brain isn’t actively working on the problem, there is no real communication needed.
  3. Illumination
    This is the eureka moment. You literally “make up your mind”: the cognitive stars align and you settle on a decision. You are now ready to take action. Again, at this stage, there is little to no outside communication needed.
  4. Verification
    Even though we’ve “made up our mind,” there is still one more step before action. We need to make sure our decision matches what is feasible in the real world. Does our internal reality match the external one? Again, our brains are actively involved, pushing us forward. Again, there is often some type of communication required here.

What we have here — in intelligence terms — is a sensemaking loop. The brain ideally wants this loop to continue smoothly, without interruption. But at two of the stages — the beginning and end — our brain needs to idle, waiting for input from the outside world.

Brains that have put tasks on idle do one of two things: They forget, or they get irritated. There are no other options.

The only variance is the degree of irritation. If the task is not that important to us, we get mildly irritated. The more important the task and the longer we are forced to put it on hold, the more frustrated we get.

Next, let’s talk about expectations. At the Preparation phase, we realize the entire world does not march to the beat of our internal drummer. Using email is our way to accommodate the collective schedules of the world. We are not demanding an immediate response. If we did, we’d use another channel, like a phone or instant messaging. When we use email, we expect those on the receiving end to fit our requirements into their priorities.

A recent survey by Jeff Toister, a customer service consultant, found that 87% of respondents expect a response to their emails within one day. Half of those expect a response in four hours or less. The most demanding are baby boomers — probably because email is still our preferred communication channel.

What we do not expect is for our emails to be completely ignored. Forever.

Yet, according to a recent benchmark study by SuperOffice, that is exactly what happens. 62.4% of businesses contacted with a customer service question in the study never responded. 90.5% never acknowledged receiving an email.  They effectively said to those customers, “Either forget us or get pissed off at us. We don’t really care.”

This lack of response is fine if you really don’t care. I toss a number of emails from my inbox daily without responding. They are a waste of my time. But if you have any expectation of having any type of relationship with the sender, take the time to hit the “reply” button.

There were some red flags that these non-responsive companies had in common. Typically, they could only be contacted through a web form on their site. I know I only fill these out if I have no other choice. If there is a direct email link, I always opt for that. These companies also tended to be smaller and didn’t use auto-responders to confirm a message had been received.

If this sounds like a rant, it is. One of my biggest frustrations is lack of email follow-up. I have found that the bar to surprise and delight me via your email response procedure is incredibly low:

  1. Respond.
  2. Don’t be a complete idiot.

Clear, Simple…and Wrong

“For every complex problem there is an answer that is clear, simple, and wrong.”
– H. L. Mencken

We live in a world of complex problems. And – increasingly – we long for simple solutions to those problems. Brexit was a simple answer to a complex problem. Trump’s border wall is a simple answer to a complex problem. The current wave of populism is being driven by the desire for simple answers to complex problems.

But, as H.L. Mencken said, all those answers are wrong.

Even philosophers – who are a pretty complex breed – have embraced the principle of simplicity. William of Ockham, a 14th-century Franciscan friar who studied logic, wrote “Entia non sunt multiplicanda praeter necessitatem.” This translates as “More things should not be used than are necessary.” It has since been called “Occam’s Razor.” In scientific research, it’s known as the principle of parsimony.

But Occam’s Razor illustrates a shortcoming of humans. We will look for the simplest solution even if it isn’t the right solution. We forget the “are necessary” part of the principle. The Wikipedia entry for Occam’s Razor includes this caveat: “Occam’s razor only applies when the simple explanation and complex explanation both work equally well. If a more complex explanation does a better job than a simpler one, then you should use the complex explanation.”

This introduces a problem for humans. Simple answers are usually easier for us. People can grasp them more easily. Given a choice between complex and simple, we almost always default to the simple. For most of our history, this has not been a bad strategy. When all the factors that determine our likelihood to survive are proximate and intent on eating us, simple and fast is almost always the right bet.

But then we humans went and built a complex world. We started connecting things together into extended networks. We exponentially introduced dependencies. Through our ingenuity, we transformed our environments and, in the process, made complexity the rule rather than the exception. Unfortunately, our brains didn’t keep up. They still operate as if our biggest concerns were to find food and to avoid becoming food.

Our brains are causal inference machines. We assign cause and effect without bothering to determine if we are right.  We are hardwired to go for simple answers. When the world was a pretty simple place, the payoff for cognitively crunching complex questions wasn’t worth it. But that’s no longer the case. And when we mistake correlation for causation, the consequences can be tragic.

Let’s go back to the example of Trump’s Wall. I don’t question that illegal (or legal, for that matter) immigration causes pressures in a society. That’s perfectly natural, no matter where those immigrants are coming from. But it’s also a dynamic and complex problem. There is a myriad of interleaved and interdependent factors underlying the visible issue. If we don’t take the time to understand those dynamics of complexity, a simple solution – like a wall – could unleash forces that have drastic and unintended consequences. Even worse, thanks to the nature of complexity, those consequences can be amplified throughout a network.

Simple answers can also provide a false hope that keeps us from digging deeper for the true nature of the problem. It lets us fall into the trap of “one and done” thinking. Why hurt our heads thinking about complex issues when we can put a checkmark beside an item on our to do list and move on to the next one?

According to Ian McKenzie, this predilection for simplicity is also rotting away the creative core of advertising. In an essay he posted on Medium, he points to a backlash against Digital because of its complexity, “Digital is complex. And because the simplicity bias says complicated is bad, digital and data are bad by association. And this can cause smart people trained in traditional thinking to avoid or tamp down digital ideas and tactics because they appear to be at odds with the simplicity dogma.”

Like it or not, we ignore complexity at our peril. As David Krakauer, President of the Santa Fe Institute and William H. Miller Professor of Complex Systems warned, “There is only one Earth and we shall never improve it by acting as if life upon it were simple. Complex systems will not allow it.”

 

Don’t Be So Quick to Eliminate Friction

If you have the mind of an engineer, you hate friction. When you worship at the altar of optimization, friction is something to be ruthlessly eliminated – squeezed out of the equation. Friction equals inefficiency. It saps the energy out of our efforts.  It’s what stands between reality and a perfect market, where commerce theoretically slides effortlessly between participants. Much of what we call tech today is optimized with the goal of eliminating friction.

But there’s another side of friction. And perhaps we shouldn’t be too quick to eliminate it.  Without friction, there would be no traction, so you wouldn’t be able to walk. Your car would have no brakes. Nails, bolts, screws, glue and tape wouldn’t work. Without friction, there would be nothing to keep the world together.

And in society, it’s friction that slows us down and helps us smell the roses. That’s because another word for friction – when we talk about our experiential selves – is savouring.

Take conversations, for instance. A completely efficient, friction-free conversation would be pretty damn boring. It would get the required information from participant A to participant B – and vice versa – in the minimum number of words. There would be no embellishment, no nuance, no humanity. It would not be a conversation we would savour.

Savouring is all about slowing down. According to Maggie Pitts, a professor at the University of Arizona who studies how we savour conversations, “Savouring is prolonging, extending, and lingering in a positive or pleasant feeling.” And you can’t prolong anything without friction.

But what about friction in tech itself?  As I said before, the rule of thumb in tech is to eliminate as much friction as possible. But can the elimination of friction go too far? Product designer Jesse Weaver says yes. In an online essay, he says we friction-obsessed humans should pay more attention to the natural world, where friction is still very much alive-and-well, thank you:

“Nature is the ultimate optimizer, having run an endless slate of A/B tests over billions of years at scale. And in nature, friction and inconvenience have stood the test of time. Not only do they remain in abundance, but they’ve proven themselves critical. Nature understands the power of friction while we have become blind to it.”

A couple of weeks ago, I wrote about the Yerkes-Dodson law, which states that there can be too much of a good thing – or, in this case, too little of a supposedly bad thing. According to a 2012 study, when it comes to assigning value, we actually appreciate a little friction. It’s known as the IKEA effect. There is a sweet spot for optimal effort. Too much and we get frustrated. Too little and we feel that it was too easy. When it’s just right, we have a crappy set of shelves that we love more than we should because we had to figure out how to put them together.

Weaver feels the same is true for tech. As examples, he points to Amazon’s Dash smart button and Facebook’s Frictionless Sharing. In the first case, Amazon claims the need for the button has been eliminated by voice-activated shopping through Alexa. In the second case, we had legitimate privacy concerns. But Weaver speculates that perhaps both things just moved a little too fast for our comfort, removing our sense of control. We need a little bit of friction in the system so we feel we can apply the brakes when required.

If we eliminate too much friction, we’ll slip over that hump into not valuing the tech-enabled experiences we’re having. He cites the 2018 World Happiness Report, which has been tracking our satisfaction with life on a global basis for over a decade. In that time, despite our tech capabilities increasing exponentially, our happiness has flatlined.

I have issues with his statistical logic – there is a bushel basket full of confounding factors in the comparison he’s trying to make – but I generally agree with Weaver’s hypothesis. We do need some friction in our lives. It applies the brakes to our instincts. It forces us to appreciate the here and now that we’re rushing through. It opens the door to serendipity and makes allowances for savouring.

In the end, we may need a little friction in our lives to appreciate what it means to be human.

 

Less Tech = Fewer Regrets

In a tech-ubiquitous world, I fear our reality is becoming more “tech” and less “world.” But how do you fight that? Well, if you’re Kendall Marianacci – a recent college grad – you ditch your phone and move to Nepal. In that process she learned that “paying attention to the life in front of you opens a new world.”

In a recent post, she reflected on lessons learned by truly getting off the grid:

“Not having any distractions of a phone and being immersed in this different world, I had to pay more attention to my surroundings. I took walks every day just to explore. I went out of my way to meet new people and ask them questions about their lives. When this became the norm, I realized I was living for one of the first times of my life. I was not in my own head distracted by where I was going and what I needed to do. I was just being. I was present and welcoming to the moment. I was compassionate and throwing myself into life with whoever was around me.”

It’s sad and a little shocking that we have to go to such extremes to realize how much of our world can be obscured by a little 5-inch screen. Where did tech that was supposed to make our lives better go off the rails? And was the derailment intentional?

“Absolutely,” says Jesse Weaver, a product designer. In a post on Medium.com, he lays out – in alarming terms – our tech dependency and the trade-off we’re agreeing to:

“The digital world, as we’ve designed it, is draining us. The products and services we use are like needy friends: desperate and demanding. Yet we can’t step away. We’re in a codependent relationship. Our products never seem to have enough, and we’re always willing to give a little more. They need our data, files, photos, posts, friends, cars, and houses. They need every second of our attention.

We’re willing to give these things to our digital products because the products themselves are so useful. Product designers are experts at delivering utility.”

But are they? Yes, there is utility here, but it’s wrapped in a thick layer of addiction. What product designers are really good at is fostering addiction by dangling a carrot of utility. And, as Weaver points out, we often mistake utility for empowerment:

“Empowerment means becoming more confident, especially in controlling our own lives and asserting our rights. That is not technology’s current paradigm. Quite often, our interactions with these useful products leave us feeling depressed, diminished, and frustrated.”

That’s not just Weaver’s opinion. A new study from HumaneTech.com backs it up with empirical evidence. They partnered with Moment, a screen time tracking app, “to ask how much screen time in apps left people feeling happy, and how much time left them in regret.”

According to 200,000 iPhone users, here are the apps that make people happiest:

  1. Calm
  2. Google Calendar
  3. Headspace
  4. Insight Timer
  5. The Weather
  6. MyFitnessPal
  7. Audible
  8. Waze
  9. Amazon Music
  10. Podcasts

That’s three meditative apps, three utilitarian apps, one fitness app, one entertainment app and two apps that help you broaden your intellectual horizons. If you are talking human empowerment – according to Weaver’s definition – you could do a lot worse than this roundup.

But here were the apps that left their users with a feeling of regret:

  1. Grindr
  2. Candy Crush Saga
  3. Facebook
  4. WeChat
  5. Candy Crush
  6. Reddit
  7. Tweetbot
  8. Weibo
  9. Tinder
  10. Subway Surf

What is even more interesting is the average time spent on these apps. For the first group, the average daily usage was 9 minutes. For the regret group, the average daily time spent was 57 minutes! We feel better about apps that do their job, add something to our lives and then let us get on with living that life. What we hate are time sucks that may offer a kernel of functionality wrapped in an interface that ensnares us like a digital spider web.

This study comes from the Center for Humane Technology, headed by ex-Googler Tristan Harris. The goal of the Center is to encourage designers and developers to create apps that move “away from technology that extracts attention and erodes society, towards technology that protects our minds and replenishes society.”

That all sounds great, but what does it really mean for you and me and everybody else who hasn’t moved to Nepal? It all depends on what revenue model is driving development of these apps and platforms. If it is anything that depends on advertising – in any form – don’t count on any nobly intentioned shifts in design direction anytime soon. More likely, it will mean some half-hearted placations like Apple’s new Screen Time warning that pops up on your phone every Sunday, giving you the illusion of control over your behaviour.

Why an illusion? Because things like Apple’s Screen Time are great for our prefrontal cortex, the intent-driven part of our rational brain that puts our best intentions forward. They’re not so good for our lizard brain, which subconsciously drives us to play Candy Crush and swipe our way through Tinder. And when it comes to addiction, the lizard brain has been on a winning streak for most of the history of mankind. I don’t like our odds.

The developers’ escape hatch is always the same – they’re giving us control. It’s our own choice, and freedom of choice is always a good thing. But there is an unstated deception here. It’s the same lie that Mark Zuckerberg told last Wednesday when he laid out the privacy-focused future of Facebook. He’s putting us in control. But he’s not. What he’s doing is making us feel better about spending more time on Facebook. And that’s exactly the problem. The less we worry about the time we spend on Facebook, the less we will think about it at all. The less we think about it, the more time we will spend. And the more time we spend, the more we will regret it afterwards.

If that doesn’t seem like an addictive cycle, I’m not sure what does.

 

It’s the Fall that’s Gonna Kill You

Butch: I’ll jump first.
Sundance: Nope.
Butch: Then you jump first.
Sundance: No, I said!
Butch: What’s the matter with you?!
Sundance: I can’t swim!
Butch:  Why, you crazy — the fall’ll probably kill ya!

                                     Butch Cassidy and the Sundance Kid – 1969

Last Monday, fellow Insider Steven Rosenbaum asked, “Is Advertising Obsolete?” The column and the post by law professor Ramsi Woodcock that prompted it were both interesting. So were the comments – which were by and large supportive of good advertising.

I won’t rehash Rosenbaum’s column, but it strikes me that we – being the collective we of the MediaPost universe – have been debating whether advertising is good or bad, relevant or obsolete, a trusted source of information or a con job for the ages, and we don’t seem to be any closer to an answer.

The reason is that an advertisement is all of those things. But not at the same time.

I used to do behavioral research, specifically eye tracking. At the end of an eye-tracking study, you get what’s called an aggregate heat map. This is the summary of all the eyeball activity of all the participants over the entire duration of all interactions with whatever the image was. These were interesting, but personally I was fascinated with the time slices of the interactions. I found that often you can learn more about behaviors by looking at who looked at what, when. It was only when we looked at interactions on a second-by-second basis that the really interesting patterns started to emerge. For example, when looking at a new website, men looked immediately at the navigation bar, whereas women were first drawn to the “hero” image. But if you looked at the aggregates – the sum of all scanning activity – the men’s and women’s heat maps were almost identical.
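This aggregation effect is easy to reproduce with made-up numbers. The following sketch (the figures are invented, purely for illustration, not real eye-tracking data) shows two groups whose second-by-second attention is exactly opposite, yet whose aggregate heat maps are indistinguishable:

```python
# Toy illustration: summing over time slices can hide opposite behaviors.
# Each tuple is (% of attention on navigation bar, % on hero image)
# for one second of viewing. The numbers are invented for illustration.

men   = [(90, 10), (70, 30), (30, 70), (10, 90)]  # seconds 1-4: nav first
women = [(10, 90), (30, 70), (70, 30), (90, 10)]  # seconds 1-4: hero first

def aggregate(slices):
    """Sum attention over all time slices, as an aggregate heat map does."""
    nav = sum(n for n, _ in slices)
    hero = sum(h for _, h in slices)
    return (nav, hero)

print(aggregate(men))    # (200, 200)
print(aggregate(women))  # (200, 200) -- the aggregates are identical
print(men[0], women[0])  # (90, 10) (10, 90) -- but second one differs
```

The per-second slices tell opposite stories; the summary erases them both.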

I believe the same thing is happening when we try to pin down advertising. And it’s because advertising – and our attitudes towards it – change through the life cycle of a brand, or product, or company.

Our relationship with a product or brand can be represented by an inverted-U chart, with the vertical axis being awareness/engagement and the horizontal axis being time. Like a zillion other things, our brain defines our relationship with a product or brand by a resource/reward algorithm. Much of human behavior can be attributed to a dynamic tension between opposing forces, and this is no exception. Driving us to explore the new are cognitive forces like novelty seeking and changing expectations of utility, while things like cognitive lock-in and the endowment effect tend to keep us loyal. As we engage with a new product or brand, we climb up the first side of the inverted U. But nothing in nature continues on a straight line, much as every sales manager would love it to. At some point, our engagement will peak and we’ll get itchy feet to try something new. Then we start falling down the descent of the U. And it’s this fall that kills our acceptance of advertising.

This inverted U shows up all the time in human behavior. We assume you can never have too much of a good thing, but this is almost never true. There’s even a law that defines this, known as the Yerkes-Dodson Law. Developed by psychologists Robert Yerkes and John Dodson in 1908, it plots performance against mental or physical arousal. Predictably, performance increases with how fully we’re engaged with whatever we’re doing – but only up to a point. Then performance peaks and starts to decline into anxiety.
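The shape of that curve can be sketched with a simple quadratic. To be clear, Yerkes and Dodson didn’t specify this equation; it’s just an illustrative stand-in for the rise-peak-decline pattern:

```python
# A minimal sketch of an inverted-U (Yerkes-Dodson-style) curve.
# The quadratic form is an assumption for illustration, not the law itself.

def performance(arousal):
    """Toy inverted U: rises, peaks at arousal = 1.0 on a 0-2 scale, falls."""
    return arousal * (2.0 - arousal)

for a in [0.0, 0.5, 1.0, 1.5, 2.0]:
    print(f"arousal={a:.1f} -> performance={performance(a):.2f}")
# Performance climbs (0.00 -> 0.75 -> 1.00), then falls back (0.75 -> 0.00).
```

The key property is symmetry around the peak: more engagement past the optimum buys you nothing but decline.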

It’s also why TV show runners are getting smarter about ending a series just as they crest the top of the hump. Hard lessons about the dangers of the decline have been learned by the jumping of multiple sharks.

Our entire relationship with a brand or product is built on the foundation of this inverted U, so it should come as no surprise that our acceptance of advertising for said brand or product also has to be plotted on this same chart. Yet it seems to constantly come as a surprise to marketing teams. In the beginning, on the upslope of the upside-down U, we are seeking novelty, and an advertisement for something new fits the bill.

When the inevitable downward curve starts, the sales and marketing teams panic and respond by upping advertising. They do their best to maintain a straight up line, but it’s too late. The gap between their goals and audience acceptance continues to grow as one line is projected upwards and the other curves ever more steeply downwards. Eventually the message is received and the plug is pulled, but the damage has already been done.

When we look at advertising, we have to plot it against this ubiquitous U. And when we talk about advertising, we have to be more careful to define what we’re talking about. If we’re talking specifically, we will all be able to find examples of useful and even welcome ads. But when I talk about the broken contract of advertising, I speak in more general terms. In the digital compression of timelines, we are reaching the peak of advertising effectiveness faster than ever before. And when we hit the decline, we actively reject advertising because we can. We have other alternatives. This decline is dragging the industry down with it. Yes, we can all think of good ads, but the category is suffering from our evolving opinion which is increasingly being formed on the downside of the U.

 

 

Beijing’s Real-Life Black Mirror Episode

In another example of fact mirroring fiction (and vice versa), the Chinese government has been experimenting since 2014 with a social credit program that rewards good behavior and punishes bad. Next year, it becomes mandatory. On hearing this news, many drew a parallel to the “Nosedive” episode from the Netflix series Black Mirror. The comparison was understandable. Both feature a universal system where individuals are assigned a behavior-based score that has real-life consequences. And it is indeed ominous when a government known for being “Big Brother” is unveiling such a program. But this misses the point of “Nosedive.” Black Mirror creator and writer Charlie Brooker wasn’t worried about Big Brother. He was worried about you and me – and our behavior in the grips of such a program.

In Brooker’s world, there was no overseer of the program. It was an open market of social opinion. If people didn’t like you, you got docked points. If they did, you got extra points. It was like a Yelp for everyone, everywhere. And just like a financial credit score, your social credit score was factored into what kind of house you could buy, what type of car you could rent and what seat you got on an airplane.

If we strip emotion out of it, this doesn’t sound like a totally stupid idea. We like ratings. They work wonderfully as a crowd-sourced reference in an open market. And every day I’m reminded that there are a lot of crappy people out there. It sounds like this might be a plausible solution. It reminds me of a skit the comedian Gallagher used to do in the 80’s about stupid drivers. Everyone would get one of those suction dart guns. If you saw a jerk on the road, you could just shoot a dart at his car. Once he had collected a dozen or so darts, the cops could give him a ticket for being an idiot. This is the same idea, with digital technology applied.

But the genius of Brooker and Black Mirror is to take an aspect of technology that actually makes sense, factor in human behavior and then take it to the darkest place possible. And that’s what he did in “Nosedive.” It’s about a character named Lacie – played by Bryce Dallas Howard – whose idea of living life is trying to make everyone happy. Her goal is immediately quantified and given teeth by a universal social credit score – a la Yelp – where your every action is given a score out of 5. This gets rolled up into your overall score. At the beginning of the episode, Lacie’s score is respectable but a little shy of the top-rung scores enjoyed by the socially elite.

But here’s the Black Mirror twist. It turns out you can’t be elite and still be a normal person. It’s another side of the social influencer column I wrote last week. The only way you can achieve the highest scores is to become obsessed with them. Lacie – who is a pretty good person – finds the harder she tries, the faster her score goes into a nose dive – hence the name of the episode.

As Brooker explains, “Everyone’s a little tightened and false, because everyone’s terrified of being marked down – the consequences of that are unpleasant. So, basically, it’s the world we live in.”

China is taking more of a big brother/big data approach. Rather than relying exclusively on a social thumbs up or thumbs down, they’re crunching data from multiple sources to come up with an algorithmically derived score. High scores qualify you for easy loans, better seats on planes, faster check-ins at hotels and fast-tracked visa applications. Bad scores mean you can’t book an airline ticket, get that promotion you’ve been hoping for or leave the country. Rogier Creemers – an academic from Leiden University who is following China’s implementation of the program – explains, “I think the best way to understand the system is as a sort of bastard love child of a loyalty scheme.”

Although participation in the program is still voluntary (until 2020), an article Wired published in 2017 hinted that Chinese society is already falling down the same social rabbit hole envisioned by Brooker: “Higher scores have already become a status symbol, with almost 100,000 people bragging about their scores on Weibo (the Chinese equivalent of Twitter) within months of launch.”

Personally, the last thing I would want is the government of China tracking my every move and passing judgement on my social worthiness. But even without that, I’m afraid Charlie Brooker would be right. Social credit would become just one more competitive hierarchy. And we’d do whatever it takes – good or bad – to get to the top.