The Medium is the Message, Mr. President

Every day that Barack Obama was in the White House, he read 10 letters. Why letters? Because form matters. There’s still something about a letter. It’s so intimate. It uses a tactile medium. Emotions seem to flow more easily through cursive loops and the sound of pen on paper. Letters balance between raw and reflective. As such, they may be an unusually honest glimpse into the soul of the writer. Obama seemed to get that. There was an entire team of hundreds of people at the White House that reviewed 10,000 letters a day and chose the 10 that made it to Obama, but the intent was to give an unfiltered snapshot of the nation at any given time. It was a mosaic of personal stories that – together – created a much bigger narrative.

Donald Trump doesn’t read letters. He doesn’t read much of anything. The daily presidential briefing has been dumbed down to media more fitting of the President’s 140-character attention span. Trump likes to be briefed with pictures and videos. His information medium of choice? Cable TV. He has turned Twitter into his official policy platform.

Today, technology has exponentially multiplied the number of communication media available to us. And in that multiplicity, Marshall McLuhan’s 50-year-old trope about the medium being the message seems truer than ever. The channels we choose – whether we’re on the sending or receiving end – carry their own inherent message. They say who we are, what we value, how we think. They intertwine with the message, determining how it will be interpreted.

I’m sad that letter writing is a dying art, but I’m also contributing to its demise. It’s been years since I’ve written a letter. I do write this column, which is another medium. But even here I’m mislabeling it. Technically this is a blog post. A column is a concept embedded in the medium of print – with its accompanying physical restriction of column inches. But I like to call it a column, because in my mind that carries its own message. A column comes with an implicit promise between you – the readers – and me, the author. Columns are meant to be regularly recurring statements of opinion. I have to respect the fact that I remain accountable for this Tuesday slot that MediaPost has graciously given me. Week after week, I try to present something that I hope you’ll find interesting and useful enough to keep reading. I feel I owe that to you. To me, a “post” feels more ethereal – with less of an ongoing commitment between author and reader. It’s more akin to drive-by writing.

So that brings me to one of the most interesting things about letters and President Obama’s respect for them. They are meant to be a thoughtful medium between two people. The thoughts captured within are important enough to the writer that they’re put in print but they are intended just for the recipient. They are one of the most effective media ever created to ask for empathetic understanding from one person in particular. And that’s how Obama’s Office of Presidential Correspondence treated them. Each letter represented a person who felt strongly enough about something that they wanted to share it with the President personally. Obama used to read his ten letters at the end of the day, when he had time to digest and reflect. He often made notations in the margins asking pointed questions of his staff or requesting more investigation into the circumstances chronicled in a letter. He chose to set aside a good portion of each day to read letters because he believed in the message carried by the medium: Individuals – no matter who they are – deserve to be heard.

Our Brain on Reviews

An interesting new study was just published about how our brain mathematically handles online reviews, and I want to talk about it today. But before I get to that, I want to talk about foraging a bit.

The story of how science discovered our foraging behaviors serves as a mini lesson in how humans tick. The economists of the 1940s and ’50s discovered the world of micro-economics, based on the foundation that humans were perfectly rational – we were homo economicus. When making personal economic choices in a world of limited resources, we maximized utility. The economists of the time assumed this was a uniquely human property, bequeathed to us by virtue of the reasoning power of our superior brains.

In the ’60s, behavioral ecologists knocked our egos down a peg or two. It wasn’t just humans that could do this. Foxes could do it. Starlings could do it. Pretty much any species had the same ability to seemingly make optimal choices when faced with scarcity. It was how animals kept from starving to death. This was the birth of foraging theory. This wasn’t some Homo sapiens-exclusive behavior directed from the heights of rationality downwards. It was an evolved behavior, built from the ground up. It’s just that humans had learned how to apply it to our abstract notion of economic utility.

Three decades later, two researchers at Xerox’s Palo Alto Research Center found another twist. Not only had our ability to forage evolved all the way through our extensive family tree, but we also seemed to borrow this strategy and apply it to entirely new situations. Peter Pirolli and Stuart Card found that when humans navigate content in online environments, the exact same patterns could be found. We foraged for information. Those same calculations determined whether we would stay in an information “patch” or move on to more promising territory.

This seemed to indicate three surprising discoveries about our behavior:

  • Much of what we think is rational behavior is actually driven by instincts that have evolved over millions of years
  • We borrow strategies from one context and apply them in another. We use the same basic instincts to find the FAQ section of a website that we used to find sustenance on the savannah.
  • Our brains seem to use Bayesian logic to continuously calculate and update a model of the world. We rely on this model to survive in our environment, whatever and wherever that environment might be.

So that brings us to the study I mentioned at the beginning of this column. If we take the above into consideration, it should come as no surprise that our brain uses similar evolutionary strategies to process things like online reviews. But the way it does it is fascinating.

The amazing thing about the brain is how it seamlessly integrates and subconsciously synthesizes information and activity from different regions. For example, in foraging, the brain integrates information from the regions responsible for wayfinding – knowing our place in the world – with signals from the dorsal anterior cingulate cortex – an area responsible for reward monitoring and executive control. Essentially, the brain is constantly updating an algorithm about whether the effort required to travel to a new “patch” will be balanced by the reward we’ll find when we get there. You don’t consciously marshal the cognitive resources required to do this. The brain does it automatically. What’s more, the brain uses many of the same resources and algorithms whether we’re considering going to McDonald’s for a large order of fries or deciding which online destination would be the best bet for researching our upcoming trip to Portugal.
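
To make that trade-off concrete, here is a minimal sketch – purely illustrative, with made-up numbers and a hypothetical function name, not anything from the study – of the kind of patch-leaving calculation foraging theory describes: stay put while the current patch still pays, move on when the expected reward elsewhere beats the cost of getting there.

```python
# A purely illustrative sketch of the patch-leaving trade-off described above,
# loosely in the spirit of optimal-foraging models. The function name and the
# numbers are hypothetical; the brain's actual computation is far messier.

def should_move_on(current_rate, expected_new_reward, travel_cost):
    """Return True if the expected payoff of a new patch, net of the effort
    to get there, beats the (diminishing) return of the patch we're in."""
    expected_new_rate = expected_new_reward - travel_cost
    return expected_new_rate > current_rate

# The patch at hand is getting stale (low current return), while a new "patch"
# still looks worthwhile even after accounting for the trip.
print(should_move_on(current_rate=2.0, expected_new_reward=10.0, travel_cost=5.0))  # True
```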

In evaluating online reviews, we have a different challenge: how reliable are the reviews? The context may be new – our ancestors didn’t have TripAdvisor or Airbnb ratings for choosing the right cave to sleep in tonight – but the problem isn’t. What criteria should we use when we decide to integrate social information into our decision-making process? If Thorlak the bear hunter tells me there’s a great cave a half-day’s march to the south, should I trust him? Experience has taught us a few handy rules of thumb when evaluating sources of social information: the reliability of the source and the consensus of the crowd. Has Thorlak ever lied to us before? Do others in the tribe agree with him? These are hardwired social heuristics. We apply them instantly and instinctively to new sources of information that come from our social network. We’ve been doing it for thousands of years. So it should come as no surprise that we borrow these strategies when dealing with online reviews.

In a neuro-scanning study from University College London, researchers found that reliability plays a significant role in how our brains treat social information. Once again, a well-evolved capability of the brain is recruited to help us in a new situation. The dorsomedial prefrontal cortex is the area of the brain that keeps track of our social connections. This “social monitoring” ability of the brain works in concert with the ventromedial prefrontal cortex, an area that processes value estimates.

The researchers found that this part of our brain works like a Bayesian computer when considering incoming information. First we establish a “prior” that represents a model of what we believe to be true. Then we subject this prior to possible statistical updating based on new information – in this case, online reviews. If our confidence is high in this “prior” and the incoming information is weak, we tend to stick with our initial belief. But if our confidence is low and the incoming information is strong – i.e. a lot of positive reviews – then the brain overrides the prior and establishes a new belief, based primarily on the new information.
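
As a rough illustration of that prior-versus-evidence logic – a sketch only, with invented numbers, not the study’s actual model – you can treat the belief as a Beta distribution and each review as a small piece of Bernoulli evidence:

```python
# A rough Beta-Binomial sketch of the "prior vs. new evidence" trade-off described
# above. The numbers are invented for illustration; this is not the study's model.

def updated_belief(prior_positive, prior_negative, new_positive, new_negative):
    """Treat belief as a Beta distribution; each review adds Bernoulli evidence.
    Returns the posterior probability that the option is 'good'."""
    post_positive = prior_positive + new_positive
    post_negative = prior_negative + new_negative
    return post_positive / (post_positive + post_negative)

# Confident prior (as if built from 50 past experiences), weak evidence: belief barely moves.
print(updated_belief(45, 5, 2, 1))    # ~0.89, still close to the prior of 0.90

# Weak prior (a hunch), strong evidence (200 mostly positive reviews): the reviews take over.
print(updated_belief(2, 2, 180, 20))  # ~0.89, driven almost entirely by the new information
```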

While this seems like common sense, the mechanisms at play are interesting. The brain effortlessly pattern-matches new types of information and recruits the region that is most likely to have evolved to successfully interpret that information. In this case, the brain has decided that online reviews are most like information that comes from social sources. It combines the interpretation of this data with an algorithmic function that assigns value to the new information and calculates a new model – a new understanding of what we believe to be true. And it does all this “under the hood” – sitting just below the level of conscious thought.

Flow and the Machine

“In the future, either you’re going to be telling a machine what to do, or the machine is going to be telling you.”

Christopher Penn – VP of Marketing Technology, Shift Communications.

I often talk about the fallibility of the human brain – those irrational, cognitive biases that can cause us to miss the reality that’s right in front of our face. But there’s another side to the human brain – the intuitive, almost mystical machinations that happen when we’re on a cognitive roll, balancing gloriously on the edge between consciousness and subconsciousness. Malcolm Gladwell took a glancing shot at this in his mega-bestseller Blink. But I would recommend going right to the master of “Flow” – Mihaly Csikszentmihalyi (pronounced, if you’re interested, me-hi Chick-sent-me-hi). The Hungarian psychologist coined the term “flow,” referring to a highly engaged mental state where we’re completely absorbed in the work at hand. Csikszentmihalyi calls it the “psychology of optimal experience.”

It turns out there’s a pretty complicated neuroscience behind flow. In a blog post, gamer Adam Sinicki describes a state where the brain finds an ideal balance between instinctive behavior and total focus on one task. The state is called transient hypofrontality. It can sometimes be brought on by physical exercise, which is why some people can think better while walking, or even jogging. The brain juggles the resources required, and this can force a stepping down of the prefrontal cortex, the part of the brain that causes us to question ourselves. This part of the brain is needed in unfamiliar circumstances, but in a situation where we’ve thoroughly rehearsed the actions required, it’s actually better if it takes a break. This allows other – more intuitive – parts of the brain to come to the fore. And that may be the secret of “Flow.” It may also be the one thing that machines can’t replicate – yet.

The Rational Machine

If we were to compare the computer to a part of the brain, it would probably be the Prefrontal Cortex (PFC). When we talk about cognitive computing, what we’re really talking about is building a machine that can mimic – or exceed – the capabilities of the PFC. This is the home of our “executive function” – complex decision making, planning, rationalization and our own sense of self. It’s probably not a coincidence that the part of our brain we rely on to reason through complex challenges – like designing artificial intelligence – would build a machine in its own image. And in this instance, we’re damned close to surpassing ourselves. The PFC is an impressive chunk of neurobiology in its flexibility and power, but speedy it’s not. In fact, we’ve found that if we happen to make a mistake, the brain slows almost to a standstill. It shakes our confidence and kills any “flow” that might be happening in its tracks. This is what happens to athletes when they choke. With artificial intelligence, we are probably on the cusp of creating machines that can do most of what the PFC can do, only faster, more reliably and with the ability to process much more information.

But there’s a lot more to the brain than just the PFC. And it’s in this ethereal intersection between reason and intuition that the essence of being human might be hiding.

The Future of Flow

What if we could harness “flow” at will, working in partnership with a machine that can crunch data in real time and present us with the inputs required to continue our flow-fueled exploration without the fear of making a mistake? It’s not so much a machine telling us what to do – or the reverse – as it is a partnership between human intuition and machine-based rationalization. It’s analogous to driving a modern car, where the intelligent safety and navigation features backstop our ability to drive.

Of course, it may just be a matter of time before machines best us in this area as well. Perhaps machines have already mastered flow because they don’t have to worry about the consequences of making a mistake. But it seems to me that if humans have a future, it’s not going to be in our ability to crunch data and rationalize. We’ll have to find something a little more magical to stake our claim on.


We’re Becoming Intellectually “Obese”

Humans are defined by scarcity. All our evolutionary adaptations tend to be built to ensure survival in harsh environments. This can sometimes backfire on us in times of abundance.

For example, humans are great at foraging. We have built-in algorithms that tell us which patches are most promising and when we should give up on the patch we’re in and move to another patch.

We’re also good at borrowing strategies that evolution designed for one purpose and applying them to another. This is called exaptation. For example, we’ve exapted our food-foraging strategies and applied them to searching for information in an online environment. We use these skills when we look at a website, conduct an online search or scan our email inbox. But as we forage for information – or food – we have to remember that this strategy assumes scarcity, not abundance.

Take food, for example. Nutritionally, we have been hardwired by evolution to prefer high-fat, high-calorie foods. That’s because this wiring took place in an environment of scarcity, where you didn’t know where your next meal was coming from. High-fat, high-calorie and high-salt foods were all “jackpots” if food was scarce. Eating these foods could mean the difference between life and death. So our brains evolved to send us a reward signal when we ate these foods. Consequently, we naturally started to forage for them.

This was all good when our home was the African savannah. Not so good when it’s Redondo Beach, there’s a fast food joint on every corner and the local Wal-Mart’s shelves are filled to overflowing with highly processed pre-made meals. We have “refined” food production to continually push our evolutionary buttons, gorging ourselves to the point of obesity. Foraging isn’t a problem here. Limiting ourselves is.

So, evolution has made humans good at foraging when things are scarce, but not so good at filtering in an environment of abundance. I suspect the same thing that happened with food is today happening with information.

Just like we are predisposed to look for food that is high in fats, salt and calories, we are drawn to information that:

  1. Leads to us having sex
  2. Leads to us having more than our neighbors
  3. Leads to us improving our position in the social hierarchy

All those things make sense in an evolutionary environment where there’s not enough to go around. But, in a society of abundance, they can cause big problems.

Just like food, for most of our history information was in short supply. We had to make decisions based on too little information, rather than too much. So most of our cognitive biases were developed to allow us to function in a setting where knowledge was in short supply and decisions had to be made quickly. In such an environment, these heuristic shortcuts would usually end up working in our favor, giving us a higher probability of survival.

These evolutionary biases become dangerous as our information environment becomes more abundant. We weren’t built to rationally seek out and judiciously evaluate information. We were built to make decisions based on little or no knowledge. There is an override switch we can use if we wish, but it’s important to know that just like we’re inherently drawn to crappy food, we’re also subconsciously drawn to crappy information.

Whether or not you agree with the mainstream news sources, the fact is that there was a thoughtful editorial process intended to improve the quality of the information we were given. Entire teams of people were employed to spend their days rationally thinking about gathering, presenting and validating the information that would be passed along to the public. In Nobel laureate Daniel Kahneman’s terminology, they were “thinking slow” about it. And because the transactional costs of getting that information to us were so high, there was a relatively strong signal-to-noise ratio.

That is no longer the case. Transactional costs have dropped to the point that it costs almost nothing to get information to us. This allows information providers to completely bypass any editorial loop and get it in front of us. Foraging for that information is not the problem. Filtering it is. As we forage through potential information “patches” – whether they be on Google, Facebook or Twitter – we tend to “think fast” – clicking on the links that are most tantalizing.

I would never have dreamed that having too much information could be a bad thing. But most of the cautionary columns I’ve written in the last few years seem to have the same root cause – we’re becoming intellectually “obese.” We’ve developed an insatiable appetite for fast, fried, sugar-frosted information.


The Winona Ryder Effect

I was in the U.S. last week. It was my first visit in the Trump era.

It was weird. I was in California, so the full effect was muted, but I watched my tongue when meeting strangers. And that’s saying something for a Canadian, for whom watching your tongue is a national pastime. (As an aside, my US host, Lance, told me about a recent post on a satire site: “Concerned, But Not Wanting To Offend, Canada Quietly Plants Privacy Hedge Along Entire U.S. Border.” That’s so us.) There was a feeling that I had not felt before. As someone who has spent a lot of time in the US over the past decade or two, I felt a little less comfortable. There was a disconnect that was new to me.

Little did I know (I’ve had my mobile CNN alerts turned off since January 20th because I was slipping into depression), but just after I whisked through Sea-Tac airport with all the privilege that being a white male affords, Washington Governor Jay Inslee would hold a press conference denouncing the new Trump Muslim ban in no uncertain terms. On the other side of the TSA security gates, a thousand protesters were gathering. I didn’t learn about any of this until I got home.

Like I said, it was weird.

And then there were the SAG awards on Sunday night. What the hell was the deal with Winona Ryder?

When the Stranger Things cast got on stage to accept their ensemble acting award, spokesperson David Harbour unleashed a fiery anti-Trump speech. But despite his passion and volume, it was Winona Ryder, standing beside him, who lit up the share button. And she didn’t say a word. Instead, her face contorted through a series of twenty-some different expressions in under two minutes. She became, as one Twitter post said, a “human gif machine.”

Now, by her own admission, Winona is fragile. She has battled depression and anxiety for much of her professional life. Maybe she was having a minor breakdown in front of the world. Or maybe this was a premeditated and choreographed social media master stroke. Either way, it says something about us.

The Stranger Things cast hadn’t even left the stage before the Twitterverse started spreading the Ryder meme. If you look at Google Trends, there was a huge spike in searches for Winona Ryder starting right around 6:15 pm (PST) Sunday night. It peaked at 6:48 pm with a volume about 20 times that of queries for Ms. Ryder before the broadcast began.

It was David Harbour who delivered the speech Ryder was reacting to. The words were his, and while there was also a spike in searches for him coinciding with the speech, he didn’t come close to matching the viral popularity of the Ryder meme. At its peak, there were 5 searches for “Winona Ryder” for every search for “David Harbour.”

Ryder’s mugging was – premeditated or not – extremely meme-worthy. It was visual, it was over the top and – most importantly – it was a blank canvas we could project our own views onto. Winona didn’t give us any words, so we could fill in our own. We could use it to provide a somewhat bizarre exclamation point to our own views, expressed through social media.

As I was watching this happen, I knew this was going to go viral. Maybe it’s because it takes something pretty surreal to make a dent in an increasingly surreal world that leaves us numb. When the noise that surrounds us seems increasingly unfathomable, we need something like this to prick our consciousness and make us sit up and take notice. Then we hunker down again before we’re pummelled with the next bit of reality.

Let me give you one example.

As I was watching the SAG awards Sunday night, I was unaware that gunmen had opened fire on Muslim worshippers praying in a mosque in Quebec City. I only found out after I flicked through the channels after the broadcast ended. Today, as I write this, I now know that six are dead because someone hated Muslims that much. Canada also has extreme racism.

I find it hard to think about that. It’s easier to think about Winona Ryder’s funny faces. That’s not very noble, I know, but sometimes you have to go with what you’re actually able to wrap your mind around.

The Vanishing Value of the Truth

You know, the very powerful and the very stupid have one thing in common. They don’t alter their views to fit the facts. They alter the facts to fit the views.

Doctor Who, 1977

We might be in a period of ethical crisis. Or not. It’s tough to say. It really depends on what you believe. And that, in a nutshell, is the whole problem.

Take this past weekend for example. Brand new White House Press Secretary Sean Spicer, in his very first address, lied about the size of the inauguration crowd. Afterwards, a very cantankerous Kellyanne Conway defended the lying when confronted by Chuck Todd on Meet the Press. She said they weren’t lies…they were “Alternate Facts”.


So, what exactly is an alternate fact? It’s something that is not a fact at all, but a narrative intended to be believed by a segment of the population, presumably to gain something from them.

To use a popular turn of phrase, it’s “Faking It til You Make It!”

And there you have the mantra of our society. We’re rewarding alternate facts on the theory that the end justifies the means. If we throw a blizzard of alternate facts out there that resonate with our audience’s beliefs, we’ll get what we want.

The Fake It Til You Make It syndrome is popping up everywhere. It’s always been a part of marketing and advertising. Arguably, the entire industry is based on alternate facts. But it’s also showing up in the development of new products and services, especially in the digital domain. While Eric Ries never espoused dishonesty in his book, The Lean Startup, the idea of a Minimum Viable Product certainly lends itself to the principle of “faking it until you make it.” Agile development, in its purest sense, is about user feedback and rapid iteration, but humans being humans, it’s tough to resist the temptation to oversell each iteration, treading dangerously close to pitching “vaporware.” Then we hope like hell that the next development cycle will bridge some of the gap between reality and the alternate facts we sold the prospective customer.

I think we have to accept that our world may not place much value on the truth any more. It’s a slide that started about 100 years ago.

The Seven Habits of Highly Effective People author Stephen Covey reviewed the history of success literature in the US from the 1700s forward. In the first 150 years of America’s history, all the success literature was about building character. Character was defined by words like integrity, kindness, virtue and honor. The most important thing was to be a good person.

Honesty was a fundamental underpinning of the Character Ethic. This coincided with the Enlightenment in Europe. Intellectually, this movement elevated truth above belief. Our modern concept of science gained its legs: “a branch of knowledge or study dealing with a body of facts or truths.” The concepts of honor and honesty were intertwined.

But Covey noticed that things changed after the First World War. Success literature became preoccupied with the concept of personality. It was important to be likeable, extroverted, and influential. The most important thing was to be successful. Somehow, being truthful got lost in the noise generated by the rush to get rich.

Here’s the interesting thing about personality and character. Psychologists have found that your personality is resistant to change. Personality tends to work below the conscious surface and scripts play out without a lot of mindful intervention. You can read all the self-help books in the world and you probably won’t change your personality very much. But character can be worked on. Building character is an exercise in mindfulness. You have to make a conscious choice to be honest.

The other interesting thing about personality and character is how other people see you. We are wired to pick up on other people’s personalities almost instantly. We start picking up the subconscious cues immediately after meeting someone. But it takes a long time to determine a person’s character. You have to go through character-testing experiences before you can know if they’re really a good person. Character cuts to the core, whereas personality is skin deep. But in this world of “labelability” (where we think we know people better than we actually do) we often substitute personality cues for character. If a person is outgoing, confident and fun, we believe them to be trustworthy, moral and honest.

This all adds up to some worrying consequences. If we have built a society where success is worth more than integrity, then our navigational bearings become dependent on context. Behavior becomes contingent on circumstances. Things that should be absolute become relative. Truth becomes what you believe is the most expedient and useful in a given situation.

Welcome to the world of alternate facts.

Branding in the Post-Truth Age

If 2016 was nothing else, it was a watershed year for the concept of branding. In the previous 12 months, we saw a decoupling of the two elements we have always believed make up brands. As fellow Spinner Cory Treffiletti said recently:

“You have to satisfy the emotional quotient as well as the logical quotient for your brand. If not, then your brand isn’t balanced, and is likely to fall flat on its face.”

But another MediaPost article highlighted an interesting trend in branding:

“Brands will strive to be ‘meticulously un-designed’ in 2017, according to WPP brand agency Brand Union.”

This, I believe, speaks to where brands are going. And depending on which side of the agency desk you happen to be on, this could either be good news or downright disheartening.

Let’s start with the logical side of branding. In their book Absolute Value, Itamar Simonson and Emanuel Rosen sounded the death knell for brands as a proxy for consumer information. Their premise, which I agree with, is that in a market that is increasingly moving towards perfect information, brands have lost their position of trust. We would rather rely on information that comes from non-marketing sources.

But brands have been aspiring to transcend their logical side for at least 5 decades now. This is the emotional side of branding that Treffiletti speaks of. And here I have to disagree with Simonson and Rosen. This form of branding appears to be very much alive and well, thank you. In fact, in the past year, this form of branding has upped the game considerably.

Brands, at their most potent, embed themselves in our belief systems. It is here, close to our emotional hearts, that the Promised Land for brands lies. Read Montague’s famous Coke neuro-imaging experiment showed that for Coke drinkers, the brand became part of who they are. Research I was involved in showed that favored brands are responded to positively in a split second, far faster than the rational brain can act. We are hardwired to believe in brands, and the more loved the brand, the stronger the reaction. So let’s look at beliefs for a moment.

Not all beliefs are created equal. Our beliefs have an emotional valence – some beliefs are defended more strongly than others. There is a hierarchy of belief defense. At the highest level are our core beliefs: how we feel about things like politics and religion. Brands are trying to intrude on this core-belief space. There has been no better example of this than the brand of Donald Trump.

Beliefs are funny things. From an evolutionary perspective, they’re valuable. They’re mental shortcuts that guide our actions without requiring us to think. They are a type of emotional autopilot. But they can also be quite dangerous for the same reason. We defend our beliefs against skeptics – and we defend our core beliefs most vigorously. Reason has nothing to do with it. It is this type of defense system that brands would love to build around themselves.

We like to believe our beliefs are unique to us – but in actual fact, beliefs also materialize out of our social connections. If enough people in our social network believe something is true, so will we. We will even create false memories and narratives to support the fiction. The evolutionary logic is quite simple. Tribes have better odds for survival than individuals, and our tribe will be more successful if we all think the same way about certain things. Beliefs create tribal cohesion.

So, the question is – how does a brand become a belief? It’s this question that possibly points the way in which brands will evolve in the Post-Truth future.

Up to now, brands have always been unilaterally “manufactured” – carefully crafted by agencies as a distillation of marketing messages and delivered to an audience. But now, brands are multilaterally “emergent” – formed through a network of socially connected interactions. All brands are now trying to ride the amplified waves of social media. This means they have to be “meme-worthy” – which really means they have to be both notice-worthy and share-worthy. To become more amplifiable, brands will become more “jagged,” trying to act as catalysts for going viral. Branding messages will naturally evolve towards outlier extremes in their quest to be noticed and interacted with. Brands are aspiring to become “brain-worms” – wait, that’s not quite right – brands are becoming “belief-worms,” slipping past the rational brain if at all possible to lodge themselves directly in our belief systems. Brands want to be emotional shorthand notations that resonate with our most deeply held core beliefs. We have constructed a narrative of who we are, and brands that fit that narrative are adopted and amplified.

It’s this version of branding that seems to be where we’re headed – a socially infectious virus that creates its own version of the truth and builds a bulwark of belief to defend itself. Increasingly, branding has nothing to do with rational thought or a quest for absolute value.