Don’t Be So Quick to Eliminate Friction

If you have the mind of an engineer, you hate friction. When you worship at the altar of optimization, friction is something to be ruthlessly eliminated – squeezed out of the equation. Friction equals inefficiency. It saps the energy out of our efforts.  It’s what stands between reality and a perfect market, where commerce theoretically slides effortlessly between participants. Much of what we call tech today is optimized with the goal of eliminating friction.

But there’s another side of friction. And perhaps we shouldn’t be too quick to eliminate it.  Without friction, there would be no traction, so you wouldn’t be able to walk. Your car would have no brakes. Nails, bolts, screws, glue and tape wouldn’t work. Without friction, there would be nothing to keep the world together.

And in society, it’s friction that slows us down and helps us smell the roses. That’s because another word for friction – when we talk about our experiential selves – is savouring.

Take conversations, for instance. A completely efficient, friction-free conversation would be pretty damn boring. It would get the required information from participant A to participant B – and vice versa – in the minimum number of words. There would be no embellishment, no nuance, no humanity. It would not be a conversation we would savour.

Savouring is all about slowing down. According to Maggie Pitts, a professor at the University of Arizona who studies how we savour conversations, “Savouring is prolonging, extending, and lingering in a positive or pleasant feeling.” And you can’t prolong anything without friction.

But what about friction in tech itself? As I said before, the rule of thumb in tech is to eliminate as much friction as possible. But can the elimination of friction go too far? Product designer Jesse Weaver says yes. In an online essay, he says we friction-obsessed humans should pay more attention to the natural world, where friction is still very much alive and well, thank you:

“Nature is the ultimate optimizer, having run an endless slate of A/B tests over billions of years at scale. And in nature, friction and inconvenience have stood the test of time. Not only do they remain in abundance, but they’ve proven themselves critical. Nature understands the power of friction while we have become blind to it.”

A couple of weeks ago, I wrote about the Yerkes-Dodson law, which states that there can be too much of a good thing – or, in this case, too little of a supposedly bad thing. According to a 2012 study, when it comes to assigning value, we actually appreciate a little friction. It’s known as the IKEA effect. There is a sweet spot for optimal effort. Too much and we get frustrated. Too little and we feel that it was too easy. When it’s just right, we have a crappy set of shelves that we love more than we should because we had to figure out how to put them together.

Weaver feels the same is true for tech. As examples, he points to Amazon’s Dash smart button and Facebook’s Frictionless Sharing. In the first case, Amazon says the need for the button has been eliminated by voice-activated shopping through Alexa. In the second, there were legitimate privacy concerns. But Weaver speculates that perhaps both just moved a little too fast for our comfort, removing our sense of control. We need a little bit of friction in the system so we feel we can apply the brakes when required.

If we eliminate too much friction, we’ll slip over that hump into not valuing the tech-enabled experiences we’re having. He cites the 2018 World Happiness Report, which has been tracking our satisfaction with life on a global basis for over a decade. In that time, despite our tech capabilities increasing exponentially, our happiness has flatlined.

I have issues with his statistical logic – there is a bushel basket full of confounding factors in the comparison he’s trying to make – but I generally agree with Weaver’s hypothesis. We do need some friction in our lives. It applies the brakes to our instincts. It forces us to appreciate the here and now that we’re rushing through. It opens the door to serendipity and makes allowances for savouring.

In the end, we may need a little friction in our lives to appreciate what it means to be human.

 

The Social Acceptance of Siri

There was a time, not too long ago, when I did a fairly exhaustive series of posts on the acceptance of technology. The psychology of how and when we adopted disruptive tech fascinated me. So Laurie Sullivan’s article on how more people are talking to their phone caught my eye.

If you look at tech acceptance, there’s a bucketful of factors you have to consider. Utility, emotions, goals, ease of use, cost and our own attitudes all play a part. But one of the biggest factors is social acceptance. We don’t want to look like a moron in front of friends and family. It was this, more than anything else, that killed Google Glass the first time around. Call it the Glasshole factor.

So, back to Laurie’s article and the survey she referred to in it. Which shifts in the social universe are making it more acceptable to shoot the shit with Siri?

The survey has been done for the last three years by Stone Temple, so we’re starting to see some emerging trends. And here are the things that caught my attention. First of all, the biggest shifts from 2017 to 2019, in terms of percentage, are: at the gym, in public restrooms and in the theatre. Usage at home has actually slipped a little (one might assume that these conversations have migrated to Alexa and other home-based digital assistants). If we’re looking at acceptance of technology and the factors driving it, one thing jumps out from the survey: all the shifts have to do with how comfortable we feel talking to our phones in publicly visible situations. There is obviously a moving threshold of acceptability here.

As I mentioned, the three social “safe zones” – those instances where we wouldn’t be judged for speaking to our phones – have shown little movement in the last three years. These are “Home Alone,” “Home with Friends” (public but presumably safe from social judgment) and “Office Alone.” As much as possible in survey-based research, this isolates the social factor from all the other variables rather nicely and shows its importance in our collective jumping on the voice technology bandwagon.

This highlights an important lesson in the acceptance of new technologies: you have to budget in the time required for society to absorb and accept them. The more the technology will be utilized in visibly social situations, the more time you need to budget. Otherwise, the tech will only be adopted by a tiny group of socially obtuse techno-weenies and will be stranded on the wrong side of the bleeding edge. As technology becomes more personal and tags along with us in more situations, the designers and marketers of that tech will have to understand this.

This places technology acceptance in a whole new ballpark. As the tech we use increasingly becomes part of our own social-facing brand, our carefully constructed personas and the social norms we have in place become key factors that determine the pace of acceptance.

This becomes a delicate balancing act. How do you control social acceptance? As an example, let’s take one of my favorite marketing punching bags – influencer marketing – and ask whether we could accelerate acceptance by seeding it with a few key social connectors. That same strategy failed miserably when it came to promoting Google Glass to the public. And there’s a perfectly irrational reason for it. It had nothing to do with rational stuff like use cases, aesthetics or technology. It had to do with Google picking the wrong influencers – the so-called Google Glass Explorers. As a group, they tended to be tech-obsessed, socially awkward and painfully uncool. They were the people you avoid getting stuck in the corner with at a party because you just aren’t up for a 90-minute conversation on the importance of regular hard drive hygiene. No one wants to be them.

If this survey tells us anything, it tells us that – sometimes – you just have to hope and wait. Ever since Everett Rogers first sketched it out in 1962, we’ve known that innovation diffusion happens on a bell curve. Some innovations get stranded on the upside of the slope and wither away to nothingness while some make it over the hump and become part of our everyday lives. Three years ago, there were certainly people talking to their phones on buses, in gyms and at movie theatres. They didn’t care if they were judged for it. But most of us did care. Today, apparently, the social stigma has disappeared for many of us. We were just waiting for the right time – and the right company.

Less Tech = Fewer Regrets

In a tech-ubiquitous world, I fear our reality is becoming more “tech” and less “world.” But how do you fight that? Well, if you’re Kendall Marianacci – a recent college grad – you ditch your phone and move to Nepal. In that process she learned that, “paying attention to the life in front of you opens a new world.”

In a recent post, she reflected on lessons learned by truly getting off the grid:

“Not having any distractions of a phone and being immersed in this different world, I had to pay more attention to my surroundings. I took walks every day just to explore. I went out of my way to meet new people and ask them questions about their lives. When this became the norm, I realized I was living for one of the first times of my life. I was not in my own head distracted by where I was going and what I needed to do. I was just being. I was present and welcoming to the moment. I was compassionate and throwing myself into life with whoever was around me.”

It’s sad and a little shocking that we have to go to such extremes to realize how much of our world can be obscured by a little 5-inch screen. Where did tech that was supposed to make our lives better go off the rails? And was the derailment intentional?

“Absolutely,” says Jesse Weaver, a product designer. In a post on Medium.com, he lays out – in alarming terms – our tech dependency and the trade-off we’re agreeing to:

“The digital world, as we’ve designed it, is draining us. The products and services we use are like needy friends: desperate and demanding. Yet we can’t step away. We’re in a codependent relationship. Our products never seem to have enough, and we’re always willing to give a little more. They need our data, files, photos, posts, friends, cars, and houses. They need every second of our attention.

We’re willing to give these things to our digital products because the products themselves are so useful. Product designers are experts at delivering utility.”

But are they? Yes, there is utility here, but it’s wrapped in a thick layer of addiction. What product designers are really good at is fostering addiction by dangling a carrot of utility. And, as Weaver points out, we often mistake utility for empowerment:

“Empowerment means becoming more confident, especially in controlling our own lives and asserting our rights. That is not technology’s current paradigm. Quite often, our interactions with these useful products leave us feeling depressed, diminished, and frustrated.”

That’s not just Weaver’s opinion. A new study from HumaneTech.com backs it up with empirical evidence. They partnered with Moment, a screen time tracking app, “to ask how much screen time in apps left people feeling happy, and how much time left them in regret.”

According to 200,000 iPhone users, here are the apps that make people happiest:

  1. Calm
  2. Google Calendar
  3. Headspace
  4. Insight Timer
  5. The Weather
  6. MyFitnessPal
  7. Audible
  8. Waze
  9. Amazon Music
  10. Podcasts

That’s three meditative apps, three utilitarian apps, one fitness app, one entertainment app and two apps that help you broaden your intellectual horizons. If you are talking human empowerment – according to Weaver’s definition – you could do a lot worse than this roundup.

But here were the apps that left their users with a feeling of regret:

  1. Grindr
  2. Candy Crush Saga
  3. Facebook
  4. WeChat
  5. Candy Crush
  6. Reddit
  7. Tweetbot
  8. Weibo
  9. Tinder
  10. Subway Surf

What is even more interesting is the average time spent in these apps. For the first group, average daily usage was 9 minutes. For the regret group, average daily time spent was 57 minutes! We feel better about apps that do their job, add something to our lives and then let us get on with living that life. What we hate are time sucks that may offer a kernel of functionality wrapped in an interface that ensnares us like a digital spider web.

This study comes from the Center for Humane Technology, headed by ex-Googler Tristan Harris. The goal of the Center is to encourage designers and developers to create apps that move “away from technology that extracts attention and erodes society, towards technology that protects our minds and replenishes society.”

That all sounds great, but what does it really mean for you and me and everybody else who hasn’t moved to Nepal? It all depends on what revenue model is driving the development of these apps and platforms. If it is anything that depends on advertising – in any form – don’t count on any nobly intentioned shifts in design direction anytime soon. More likely, it will mean some half-hearted placations, like Apple’s new Screen Time warning that pops up on your phone every Sunday, giving you the illusion of control over your behaviour.

Why an illusion? Because things like Apple’s Screen Time are great for our prefrontal cortex, the intent-driven part of our rational brain that puts our best intentions forward. They’re not so good for our lizard brain, which subconsciously drives us to play Candy Crush and swipe our way through Tinder. And when it comes to addiction, the lizard brain has been on a winning streak for most of the history of mankind. I don’t like our odds.

The developers’ escape hatch is always the same – they’re giving us control. It’s our own choice, and freedom of choice is always a good thing. But there is an unstated deception here. It’s the same lie that Mark Zuckerberg told last Wednesday when he laid out the privacy-focused future of Facebook. He’s putting us in control. But he’s not. What he’s doing is making us feel better about spending more time on Facebook. And that’s exactly the problem. The less we worry about the time we spend on Facebook, the less we will think about it at all. The less we think about it, the more time we will spend. And the more time we spend, the more we will regret it afterwards.

If that doesn’t seem like an addictive cycle, I’m not sure what does.

 

It’s the Fall that’s Gonna Kill You

Butch: I’ll jump first.
Sundance: Nope.
Butch: Then you jump first.
Sundance: No, I said!
Butch: What’s the matter with you?!
Sundance: I can’t swim!
Butch:  Why, you crazy — the fall’ll probably kill ya!

– Butch Cassidy and the Sundance Kid, 1969

Last Monday, fellow Insider Steven Rosenbaum asked, “Is Advertising Obsolete?” The column and the post by law professor Ramsi Woodcock that prompted it were both interesting. So were the comments – which were by and large supportive of good advertising.

I won’t rehash Rosenbaum’s column, but it strikes me that we – being the collective we of the MediaPost universe – have been debating whether advertising is good or bad, relevant or obsolete, a trusted source of information or a con job for the ages – and we don’t seem to be any closer to an answer.

The reason is that an advertisement is all of those things. But not at the same time.

I used to do behavioral research, specifically eye tracking. At the end of an eye-tracking study, you get what’s called an aggregate heat map. This is the summary of all the eyeball activity of all the participants over the entire duration of all interactions with whatever the image was. These were interesting, but personally I was fascinated with the time slices of the interactions. I found that, often, you can learn more about behaviors by looking at who looked at what when. It was only when we looked at interactions on a second-by-second basis that we started to notice the really interesting patterns emerge. For example, when looking at a new website, men looked immediately at the navigation bar, whereas women were first drawn to the “hero” image. But if you looked at the aggregates – the sum of all scanning activities – the men’s and women’s heat maps were almost identical.
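To make that concrete, here’s a minimal sketch with made-up fixation data (the regions, sequences and counts are invented purely for illustration – they’re not from any real study). The point is simply that two groups can produce identical aggregates while behaving very differently second by second:

```python
# Toy example: aggregate "heat maps" can hide time-sliced differences.
# The fixation sequences below are invented for illustration.
from collections import Counter

# Each inner list is one simulated participant's fixation sequence, second by second.
men = [["nav", "hero", "body"] for _ in range(50)]    # look at the nav bar first
women = [["hero", "nav", "body"] for _ in range(50)]  # drawn to the hero image first

def aggregate(group):
    """Total fixations per region over the whole session – the aggregate heat map view."""
    return Counter(region for session in group for region in session)

def time_slice(group, t):
    """Fixations per region at second t – the 'who looked at what when' view."""
    return Counter(session[t] for session in group)

print(aggregate(men) == aggregate(women))        # True: the aggregates look identical
print(time_slice(men, 0), time_slice(women, 0))  # Counter({'nav': 50}) vs Counter({'hero': 50})
```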

I believe the same thing is happening when we try to pin down advertising. And it’s because advertising – and our attitudes towards it – change through the life cycle of a brand, or product, or company.

Our relationship with a product or brand can be represented by an inverted-U chart, with the vertical axis being awareness/engagement and the horizontal axis being time. Like a zillion other things, our brain defines our relationship with a product or brand by a resource/reward algorithm. Much of human behavior can be attributed to a dynamic tension between opposing forces, and this is no exception. Driving us to explore the new are cognitive forces like novelty seeking and changing expectations of utility, while things like cognitive lock-in and the endowment effect tend to keep us loyal. As we engage with a new product or brand, we climb up the first side of the inverted U. But nothing in nature continues in a straight line, much as every sales manager would love it to. At some point, our engagement will peak and we’ll get itchy feet to try something new. Then we start falling down the descent of the U. And it’s this fall that kills our acceptance of advertising.

This inverted U shows up all the time in human behavior. We assume you can never have too much of a good thing, but this is almost never true. There’s even a law that defines this, known as the Yerkes-Dodson Law. Developed by psychologists Robert Yerkes and John Dodson in 1908, it plots performance against mental or physical arousal. Predictably, performance increases with how fully we’re engaged with whatever we’re doing – but only up to a point. Then, performance peaks and starts to decline into anxiety.
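If you want to picture the shape, here’s a toy version of the curve. The quadratic formula is my own stand-in purely for illustration – Yerkes and Dodson described the hump, not an equation:

```python
# A toy inverted-U: performance rises with arousal, peaks at a moderate level,
# then declines. The specific quadratic is illustrative, not from Yerkes & Dodson.
def performance(arousal: float) -> float:
    """Hump-shaped curve on a 0-1 arousal scale, peaking at 0.5."""
    return max(0.0, 1.0 - 4.0 * (arousal - 0.5) ** 2)

for a in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"arousal={a:.1f} -> performance={performance(a):.2f}")
# Output climbs to a peak at 0.5, then declines – too little engagement and
# too much both land you well below the peak.
```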

It’s also why TV showrunners are getting smarter about ending a series just as they crest the top of the hump. Hard lessons about the dangers of the decline have been learned by the jumping of multiple sharks.

Our entire relationship with a brand or product is built on the foundation of this inverted U, so it should come as no surprise that our acceptance of advertising for said brand or product also has to be plotted on the same chart. Yet it seems to constantly come as a surprise to marketing teams. In the beginning, on the upslope of the upside-down U, we are seeking novelty, and an advertisement for something new fits the bill.

When the inevitable downward curve starts, the sales and marketing teams panic and respond by upping advertising. They do their best to maintain a line that keeps heading straight up, but it’s too late. The gap between their goals and audience acceptance continues to grow as one line is projected upwards and the other curves ever more steeply downwards. Eventually the message is received and the plug is pulled, but the damage has already been done.

When we look at advertising, we have to plot it against this ubiquitous U. And when we talk about advertising, we have to be more careful to define what we’re talking about. If we’re talking specifically, we will all be able to find examples of useful and even welcome ads. But when I talk about the broken contract of advertising, I speak in more general terms. In the digital compression of timelines, we are reaching the peak of advertising effectiveness faster than ever before. And when we hit the decline, we actively reject advertising because we can. We have other alternatives. This decline is dragging the industry down with it. Yes, we can all think of good ads, but the category is suffering from our evolving opinion, which is increasingly being formed on the downside of the U.

 

Beijing’s Real-Life Black Mirror Episode

In another example of fact mirroring fiction (and vice versa), the Chinese government has been experimenting since 2014 with a social credit program that rewards good behavior and punishes bad. Next year, it becomes mandatory. On hearing this news, many drew a parallel to the “Nosedive” episode from the Netflix series Black Mirror. The comparison was understandable. Both feature a universal system where individuals are assigned a behavior-based score that has real-life consequences. And it is indeed ominous when a government known for being “Big Brother” is unveiling such a program. But this misses the point of “Nosedive.” Black Mirror creator and writer Charlie Brooker wasn’t worried about Big Brother. He was worried about you and me – and our behavior in the grips of such a program.

In Brooker’s world, there was no overseer of the program. It was an open market of social opinion. If people didn’t like you, you got docked points. If they did, you got extra points. It was like a Yelp for everyone, everywhere. And just like a financial credit score, your social credit score was factored into what kind of house you could buy, what type of car you could rent and what seat you got on an airplane.

If we strip emotion out of it, this doesn’t sound like a totally stupid idea. We like ratings. They work wonderfully as a crowd-sourced reference in an open market. And every day I’m reminded that there are a lot of crappy people out there. It sounds like this might be a plausible solution. It reminds me of a skit the comedian Gallagher used to do in the ’80s about stupid drivers. Everyone would get one of those suction-dart guns. If you saw a jerk on the road, you could just shoot a dart at his car. Once he had collected a dozen or so darts, the cops could give him a ticket for being an idiot. This is the same idea, with digital technology applied.

But the genius of Brooker and Black Mirror is to take an aspect of technology that actually makes sense, factor in human behavior and then take it to the darkest place possible. And that’s what he did in “Nosedive.” It’s about a character named Lacie – played by Bryce Dallas Howard – whose idea of living life is trying to make everyone happy. Her goal is immediately quantified and given teeth by a universal social credit score – a la Yelp – where your every action is given a score out of 5. This gets rolled up into your overall score. At the beginning of the episode, Lacie’s score is respectable but a little shy of the top-rung scores enjoyed by the socially elite.

But here’s the Black Mirror twist. It turns out you can’t be elite and still be a normal person. It’s another side of the social influencer column I wrote last week. The only way you can achieve the highest scores is to become obsessed with them. Lacie – who is a pretty good person – finds that the harder she tries, the faster her score goes into a nosedive – hence the name of the episode.

As Brooker explains, “Everyone’s a little tightened and false, because everyone’s terrified of being marked down – the consequences of that are unpleasant. So, basically, it’s the world we live in.”

China is taking more of a Big Brother/big data approach. Rather than relying exclusively on a social thumbs up or thumbs down, they’re crunching data from multiple sources to come up with an algorithmically derived score. High scores qualify you for easy loans, better seats on planes, faster check-ins at hotels and fast-tracked visa applications. Bad scores mean you can’t book an airline ticket, get that promotion you’ve been hoping for or leave the country. Rogier Creemers – an academic from Leiden University who is following China’s implementation of the program – explains, “I think the best way to understand the system is as a sort of bastard love child of a loyalty scheme.”

Although participation in the program is still voluntary (until 2020), an article published in Wired in 2017 hinted that Chinese society is already falling down the same social rabbit hole envisioned by Brooker: “Higher scores have already become a status symbol, with almost 100,000 people bragging about their scores on Weibo (the Chinese equivalent of Twitter) within months of launch.”

Personally, the last thing I would want is the government of China tracking my every move and passing judgement on my social worthiness. But even without that, I’m afraid Charlie Brooker would be right. Social credit would become just one more competitive hierarchy. And we’d do whatever it takes – good or bad – to get to the top.

Influencer Marketing’s Downward Ethical Spiral

One of the impacts of our increasing rejection of advertising is that advertisers are becoming sneakier about presenting advertising that doesn’t look like advertising. One example is native advertising. Another is influencer marketing. I’m not a big fan of either. I find native advertising mildly irritating. But I have bigger issues with influencer marketing.

Case in point: Taytum and Oakley Fisher. They’re identical twins, two years old and have 2.4 million followers on Instagram. They are adorable. They’re also expensive. A single branded photo on their feed goes for sums in the five-figure range. Of course, “they” are only two and have no idea what’s going on. This is all being stage managed behind the scenes by their parents, Madison and Kyler.

The Fishers are not an isolated example. According to an article in Fast Company, adorable kids – especially twins – are a hot segment in what is predicted to be a $5 billion to $10 billion influencer market. Influencer management companies like God and Beauty are popping up. In a multi-billion-dollar market, there are a lot of opportunities for everyone to make a quick buck. And the bucks get bigger when the “stars” can actually remember their lines. Here’s a quote from the Fast Company article:

“The Fishers say they still don’t get many brand deals yet, because the girls can’t really follow directions. Once they’re old enough to repeat what their parents (and the brands paying them) want, they could be making even more.”

Am I the only one who finds this carrying a whiff of moral repugnance?

If so, you might say, “What’s the harm?” The audience is obviously there. It works. Taytum and Oakley appear to be having fun, judging by their identical grins. It’s just Gord being in a pissy mood again.

Perhaps. But I think there’s more going on here than we see on the typical Instagram feed.

One problem is transparency – or lack of it. Whether you agree with traditional advertising or not, at least it happens in a well-defined and well-lit marketplace. There is transparency into the fundamental exchange: consumer attention for dollars. It is an efficient and time-tested market.  There are metrics in place to measure the effectiveness of this exchange.

But when advertising attempts to present itself as something other than advertising, it slips from a black and white transaction to something lurking in the darkness colored in shades of grey. The whole point of influencer marketing is to make it appear that these people are genuine fans of these products, so much so that they can’t help evangelizing them through their social media feeds. This – of course – is bullshit. Money is paid for each one of these “genuine” tweets or posts. Big money. In some cases, hundreds of thousands of dollars. But that all happens out of sight and out of mind. It’s hidden, and that makes it an easy target for abuse.

But there is more than just a transactional transparency problem here. There is also a moral one. By becoming an influencer, you are actually becoming the influenced – allowing a brand to influence who you are, how you act, what you say and what you believe in. The influencer goes in believing that they are in control and the brand is just coming along for the ride. This is – again – bullshit. The minute you go on the payroll, you begin auctioning off your soul to the highest bidder. Amena Khan and Munroe Bergdorf both discovered this. The two influencers were cut from L’Oreal’s influencer roster for actually tweeting what they believed in.

The façade of influencer marketing is the biggest problem I have with it. It claims to be authentic and it’s about as authentic as pro wrestling – or Mickey Rourke’s face. Influencer marketing depends on creating an impossibly shiny bubble of your life filled with adorable families, exciting getaways, expensive shoes and the perfect soymilk latte. No real life can be lived under this kind of pressure. Influencer marketing claims to be inspirational, but it’s actually aspirational at the basest level. It relies on millions of us lusting after a life that is not real – a life where “all the women are strong, all the men are good-looking, and all the children are above average.”

Or – at least – all the children are named Taytum or Oakley.

 

Reality vs. Meta-Reality

“I know what I like, and I like what I know;”
– Genesis

I watched the Grammys on Sunday night. And as it turned out, I didn’t know what I liked. And I thought I liked what I knew. But by the time I wrote this column (on the Monday after the Grammys), I had changed my mind.

And it was all because of the increasing gap between what is real, and what is meta-real.

Real is what we perceive with our senses at the time it happens. Meta-real is how we reshape reality after the fact and then preserve it for future reference. And thanks to social media, the meta-real is a booming business.

Nobel laureate Daniel Kahneman first explored this with his work on the experiencing self and the remembering self. In a stripped-down example, imagine two scenarios. Scenario 1 has your hand immersed for 60 seconds in ice-cold water that causes a moderate amount of pain. Scenario 2 has your hand immersed for 90 seconds: for the first 60 seconds you’re immersed in water at the same temperature as Scenario 1, but then you leave your hand immersed for an additional 30 seconds while the water is slowly warmed by 1 degree.

After going through both scenarios and being told you have to repeat one of them, which would you choose? Logically speaking, you should choose 1. While uncomfortable, it has the benefit of sparing you an extra 30 seconds of a slightly less painful experience. But for those who went through it, that’s not what happened. Eighty percent of those who noticed that the water got a bit warmer chose to redo Scenario 2.

It turns out that we have two mental biases that kick in when we remember something we experienced:

  1. Duration doesn’t count
  2. Only the peak (best or worst moment) and the end of the experience are registered – as the sketch below illustrates.
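Kahneman’s shorthand for this is the peak-end rule: our remembered rating of an experience is roughly the average of its worst (or best) moment and its final moment, with duration largely ignored. A toy calculation shows how that flips the cold-water choice (the pain ratings of 8 and 7 are invented stand-ins, not figures from the study):

```python
# Toy peak-end arithmetic for the cold-water scenarios. Pain ratings are invented.
def peak_end(moments: list) -> float:
    """Remembered badness: average of the worst moment and the final moment."""
    return (max(moments) + moments[-1]) / 2

def total_pain(moments: list) -> float:
    """Duration-weighted badness: pain summed over every second."""
    return sum(moments)

scenario_1 = [8.0] * 60               # 60 seconds at a constant pain level of 8
scenario_2 = [8.0] * 60 + [7.0] * 30  # same 60 seconds, plus 30 slightly warmer ones

print(total_pain(scenario_1), total_pain(scenario_2))  # 480.0 690.0 – Scenario 2 hurts more in total
print(peak_end(scenario_1), peak_end(scenario_2))      # 8.0 7.5 – but Scenario 2 is remembered as milder
```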

This applies to a lot more than just cold-water experiments. It also holds true for vacations, medical procedures, movies and even the Grammys. Not only that, there is an additional layer of meta-analysis that shifts us even further from the reality we actually experienced.

After I watched the Grammys, I had my own opinion of which performances I liked and those I didn’t care for. But that opinion was a work in progress. On Monday morning, I searched for “Best moments of Grammys 2019.” Rather quickly, my opinion changed to conform with what I was reading. And those summaries were in turn based on an aggregate of opinions gleaned from social media. It was Wisdom of Crowds – applied retroactively.

The fact is that we don’t trust our own opinions. This is hardwired in us. Conformity is something the majority of us look for. We don’t want to be the only one in the room with a differing opinion. Social psychologist Solomon Asch proved this almost 70 years ago. The difference is that in the Asch experiment, conformity happened in the moment. Now, thanks to our digital environment where opinions on anything can be found at any time, conformity happens after the fact. We “sandbox” our own opinions, waiting until we can see if they match the social media consensus. For almost any event you can name, there is now a market for opinion aggregation and analysis. We take this “meta” data and reshape our own reality to match.

It’s not just the malleability of our reality that is at stake here. Our memories serve as guides for the future. They color the actions we take and the people we become. We evolved as conformists because that was a much surer bet for our survival than relying on our own experiences alone.  But might this be a case of a good thing taken too far? Are we losing too much confidence in the validity of our own thoughts and opinions?

I’m pretty sure it doesn’t matter what Gord Hotchkiss thinks about the Grammys of 2019. But I fear there’s much more at stake here.