Memories Made by Media

If you said the year 1967 to me, the memory that would pop into my head would be of Haight-Ashbury (ground zero for the counterculture movement), hippies and the summer of love. In fact, that same memory would effectively stand in for the period 1967 to 1969. In my mind, those three years were variations on the theme of Woodstock, the iconic music festival of 1969.

But none of those are my memories. I was alive, but my own memories of that time are indistinct and fuzzy. I was only 6 that year and lived in Alberta, some 1,300 miles from the intersection of Haight and Ashbury Streets, so I have discarded my own personal memories as representative of the time. The ones I do have were all created by images that came via media.

The Swapping of Memories

This is an example of the two types of memories we have – personal or “lived” memories and collective memories. Collective memories are the memories we get from outside, either from other people or, in my example, from media. As we age, there tends to be a flow back and forth between these two types of memories, with one type coloring the other.

One group of academics proposed an hourglass model as a working metaphor to understand this continuous exchange of memories – with some flowing one way and others flowing the other.  Often, we’re not even aware of which type of memory we’re recalling, personal or collective. Our memories are notoriously bad at reflecting reality.

What is true, however, is that our personal memories and our collective memories tend to get all mixed up. The lower our confidence in our personal memories, the more we tend to rely on collective memories. For periods before we were born, we rely solely on images we borrow.

Iconic Memories

What is true for all memories, ours or the ones we borrow from others, is that we put them through a process called “leveling and sharpening.” This is a type of memory consolidation where we throw out some of the detail that is not important to us – that’s leveling – and exaggerate other details to make the memory more interesting – that’s sharpening.

Take my borrowed memories of 1967, for example. There was a lot more happening in the world than whatever was happening in San Francisco during the Summer of Love, but I haven’t retained any of it in my representative memory of that year. For example, there was a military coup in Greece, the first successful human heart transplant, the creation of the Corporation for Public Broadcasting, a series of deadly tornadoes in Chicago, and Typhoon Emma, which left 140,000 people homeless in the Philippines. But none of that made it into my memory of 1967.

We could call the memories we do keep “iconic” – which simply means we choose symbols to represent a much bigger and more complex reality – like everything that happened in a 365-day stretch five and a half decades ago.

Mass Manufactured Memories

Something else happens when we swap our own personal memories for collective memories – we find much more commonality in our memories. The more removed we become from our own lived experiences, the more our memories become common property.

If I asked you to say the first thing that comes to mind about 2002, you would probably look back through your own personal memory store to see if there was anything there. Chances are it would be a significant event from your own life, and this would make it unique to you. If we had a group of 50 people in a room and I asked that question, I would probably end up with 50 different answers.

But if I asked that same group what the first thing is that comes to mind when I say the year 1967, we would find much more common ground. And that ground would probably be defined by how we each identify ourselves. Some of you might have the same iconic memory that I do – Haight-Ashbury and the Summer of Love. Others may have picked the Vietnam War as the iconic memory from that year. But I would venture to guess that in our group of 50, we would end up with only a handful of answers.

When Memories are Made of Media

I am taking this walk down Memory Lane because I want to highlight how much we rely on the media to supply our collective memories. This dependency is critical, because once media images are processed by us and become part of our collective memories, they hold tremendous sway over our beliefs. These memories become the foundation for how we make sense of the world.

This is true for all media, including social media. A study in 2018 (Birkner & Donk) found that “alternative realities” can be formed through social media to run counter to collective memories formed from mainstream media. Often, these collective memories formed through social media are polarized by nature and are adopted by outlier fringes to justify extreme beliefs and viewpoints. This shows that collective memories are not frozen in time but are malleable – continually being rewritten by different media platforms.

Like most things mediated by technology, collective memories are splintering into smaller and smaller groupings, just like the media that are instrumental in their formation.

Media: The Midpoint of the Stories that Connect Us

I’m in the mood for navel gazing: looking inward.

Take the concept of “media,” for instance. Based on the masthead above this post, it’s what this site — and this editorial section — is all about. I’m supposed to be on the “inside” when it comes to media.

But media is also “inside” — quite literally. The word means “middle layer,” so it’s something in between.

There is a nuance here that’s important. Based on the very definition of the word, it’s something equidistant from both ends. And that introduces a concept we in media must think about: We have to meet our audience halfway. We cannot take a unilateral view of our function.

When we talk about media, we have to understand what gets passed through this “middle layer.” Is it information? Well, then we have to decide what information is. Again, the etymology of the word “inform” shows us that informing someone is to “give form to their mind.” But that mind isn’t a blank slate or a lump of clay to be molded as we want. There is already “form” there. And if, through media, we are meeting them halfway, we have to know something about what that form may be.

We come back to this: Media is the midpoint between what we, the tellers, believe, and what we want our audience to believe. We are looking for the shortest distance between those two points. And, as self-help author Patti Digh wrote, “The shortest distance between two people is a story.”

We understand the world through stories — so media has become the platform for the telling of stories. Stories assume a common bond between the teller and the listener. It puts media squarely in the middle ground that defines its purpose, the point halfway between us. When we are on the receiving end of a story, our medium of choice is the one closest to us, in terms of our beliefs and our world narrative. These media are built on common ideological ground.

And, if we look at a recent study that helps us understand how the brain builds models of the things around us, we begin to understand the complexity that lies within a story.

This study from the Max Planck Institute for Human Cognitive and Brain Sciences shows that our brains are constantly categorizing the world around us. And if we’re asked to recognize something, our brains have a hierarchy of concepts they will activate, depending on the situation. The higher you go in the hierarchy, the more parts of your brain are activated.

For example, if I asked you to imagine a phone ringing, the same auditory centers in your brain that activate when you actually hear the phone would kick into gear and give you a quick and dirty cognitive representation of the sound. But if I asked you to describe what your phone does for you in your life, many more parts of your brain would activate, and you would step up the hierarchy into increasingly abstract concepts that define your phone’s place in your own world. That is where we find the “story” of our phone.

As psychologist Robert Epstein  says in this essay, we do not process a story like a computer. It is not data that we crunch and analyze. Rather, it’s another type of pattern match, between new information and what we already believe to be true.

As I’ve said many times, we have to understand why there is such a wide gap in how we all interpret the world. And the reason can be found in how we process what we take in through our senses.

The immediate sensory interpretation is essentially a quick and dirty pattern match. There would be no evolutionary purpose to store more information than is necessary to quickly categorize something. And the fidelity of that match is just accurate enough to do the job — nothing more.

For example, if I asked you to draw a can of Coca-Cola from memory, how accurate do you think it would be? The answer, proven over and over again, is that it probably wouldn’t look much like the “real thing.”

That’s coming from one sense, but the rest of your senses are just as faulty. You think you know how Coke smells and tastes and feels as you drink it, but these are low fidelity tags that act in a split second to help us recognize the world around us. They don’t have to be exact representations because that would take too much processing power.

But what’s really important to us is our “story” of Coke. That was clearly shown in one of my favorite neuromarketing studies, done at Baylor College of Medicine by Read Montague.

He and his team reenacted the famous Pepsi Challenge — a blind taste test pitting Coke against Pepsi. But this time, they scanned the participants’ brains while they were drinking. The researchers found that when Coke drinkers didn’t know what they were drinking, only certain areas of their brains activated, and it didn’t really matter if they were drinking Coke or Pepsi.

But when they knew they were drinking Coke, suddenly many more parts of the brain started lighting up, including the prefrontal cortex, the part of the brain that is usually involved in creating our own personal narratives to help us understand our place in the world.

And while the actual can of Coke doesn’t change from person to person, our Story of Coke can be as individual to us as our own fingerprints.

We in the media are in the business of telling stories. This post is a story. Everything we do is a story. Sometimes they successfully connect with others, and sometimes they don’t. But in order to make effective use of the media we choose as a platform, we must remember we can only take a story halfway. On the other end is our audience, each of whom has their own narratives that define them. Media is the middle ground where those two things connect.

Our Disappearing Attention Spans

Last week, MediaPost Editor in Chief Joe Mandese mused about our declining attention spans. He wrote,

“while in the past, the most common addictive analogy might have been opiates — as in an insatiable desire to want more — these days [consumers] seem more like speed freaks looking for the next fix.”

Mandese cited a couple of recent studies, saying that more than half of mobile users tend to abandon any website that takes longer than three seconds to load. That

“has huge implications for the entire media ecosystem — even TV and video — because consumers increasingly are accessing all forms of content and commerce via their mobile devices.”

The question that begs to be asked here is, “Is a short attention span a bad thing?” The famous comparison is that we are now more easily distracted than a goldfish. But does a shorter attention span negatively impact us, or is it just our brain changing to be a better fit with our environment?

Academics have been debating the impact of technology on our ability to cognitively process things for some time. Journalist Nicholas Carr sounded the warning in his 2010 book, “The Shallows,” where he wrote, 

“(Our brains are) very malleable, they adapt at the cellular level to whatever we happen to be doing. And so the more time we spend surfing, and skimming, and scanning … the more adept we become at that mode of thinking.”

Certainly, Carr is right about the plasticity of our brains. It’s one of the most advantageous features about them. But is our digital environment forever pushing our brains to the shallow end of the pool? Well, it depends. Context is important. One of the biggest factors in determining how we process the information we’re seeing is the device where we’re seeing it.

Back in 2010, Microsoft did a large-scale ethnographic study on how people searched for information on different devices. The researchers found those behaviors differed greatly depending on the platform being used and the intent of the searcher. They found three main categories of search behaviors:

  • Missions are looking for one specific answer (for example, an address or phone number) and often happen on a mobile device.
  • Excavations are widespread searches that need to combine different types of information (for example, researching an upcoming trip or major purchase). They are usually launched on a desktop.
  • Finally, there are Explorations: searching for novelty, often to pass the time. These can happen on all types of devices and can often progress through different devices as the exploration evolves. The initial search may be launched on a mobile device, but as the user gets deeper into the exploration, she may switch to a desktop.

The important thing about this research was that it showed our information-seeking behaviors are very tied to intent, which in turn determines the device used. So, at a surface level, we shouldn’t be too quick to extrapolate behaviors seen on mobile devices with certain intents to other platforms or other intents. We’re very good at matching a search strategy to the strengths and weaknesses of the device we’re using.

But at a deeper level, if Carr is right (and I believe he is) about our constant split-second scanning of information to find items of interest making permanent changes in our brains, what are the implications of this?

For such a fundamentally important question, there is only a small but rapidly growing body of academic research that has tried to answer it. To add to the murkiness, many of the studies done contradict each other. The best summary I could find of academia’s quest to determine whether “the Internet is making us stupid” was a 2015 article in the academic journal The Neuroscientist.

The authors sum up by essentially saying both “yes” — and “no.” We are getting better at quickly filtering through reams of information. We are spending fewer cognitive resources memorizing things we know we can easily find online, which theoretically leaves those resources free for other purposes. Finally, for this post, I will steer away from commenting on multitasking, because the academic jury is still very much out on that one.

But the authors also say that 

“we are shifting towards a shallow mode of learning characterized by quick scanning, reduced contemplation and memory consolidation.”

The fact is, we are spending more and more of our time scanning and clicking. There are inherent benefits to us in learning how to do that faster and more efficiently. The human brain is built to adapt and become better at the things we do all the time. But there is a price to be paid. The brain will also become less capable of doing the things we don’t do as much anymore. As the authors said, this includes actually taking the time to think.

So, in answer to the question “Is the Internet making us stupid?,” I would say no. We are just becoming smart in a different way.

But I would also say the Internet is making us less thoughtful. And that brings up a rather worrying prospect.

As I’ve said many times before, the brain thinks both fast and slow. The fast loop is brutally efficient. It is built to get stuff done in a split second, without having to think about it. Because of this, the fast loop has to be driven by what we already know or think we know. Our “fast” behaviors are necessarily bounded by the beliefs we already hold. It’s this fast loop that’s in control when we’re scanning and clicking our way through our digital environments.

But it’s the slow loop that allows us to extend our thoughts beyond our beliefs. This is where we’ll find our “open minds,” if we have such a thing. Here, we can challenge our beliefs and, if presented with enough evidence to the contrary, willingly break them down and rebuild them to update our understanding of the world. In the sense-making loop, this is called reframing.

The more time we spend “thinking fast” at the expense of “thinking slow,” the more we will become prisoners to our existing beliefs. We will be less able to consolidate and consider information that lies beyond those boundaries. We will spend more time “parsing” and less time “pondering.” As we do so, our brains will shift and change accordingly.

Ironically, our minds will change in such a way to make it exceedingly difficult to change our minds.

The Joy of Listening to Older People

The older I get, the more I enjoy talking to people who have accumulated decades of life experience. I consider it the original social media: the sharing of personal oral histories.

People my age often become interested in their family histories. When you talk to these people, they always say the same thing: “I wish I had taken more time to talk to my grandparents when they were still alive.” No one has ever wished they had spent less time with Grandma and Grandpa.

In the hubris of youth, there seems to be the common opinion that there couldn’t be anything of interest in the past that stretches further than the day before yesterday.  When we’re young, we seldom look back. We live in the moment and are obsessed with the future.

This is probably as it should be. Most of our lives lie in front of us. But as we pass the middle mark of our own journey, we start to become more reflective. And as we do so, we realize that we’ve missed the opportunity to hear most of our own personal family histories from the people who lived it. Let’s call it ROMO: The Regret of Missing Out.

Let me give you one example. In our family, with Remembrance Day (the Canadian version of Veterans Day) fast approaching, one of my cousins asked if we knew of any family that served in World War I. I vaguely remembered that my great grandfather may have served, so I did some digging and eventually found all his service records.

I discovered that he enlisted to go overseas when he was almost 45 years old, leaving behind a wife and five children. He served as a private in the trenches in the Battle of the Somme and Vimy Ridge. He was gassed. He had at least four bouts of trench fever, which is transmitted by body lice.

As a result, he developed a debilitating soreness in his limbs and back that made it impossible for him to continue active duty. Two and a half years after he enlisted, this almost 50-year-old man was able to sail home to his wife and family.

I was able to piece this together from the various records and medical reports. But I would have given anything to be able to hear these stories from him.

Unfortunately, I never knew him. My mom was just a few years old when he died, a somewhat premature death that was probably precipitated by his wartime experience.

This was a story that fell through the cracks between the generations. And now it’s too late. It will remain mostly hidden, revealed only by the sparse information we can glean from a handful of digitized records.

It’s not easy to get most older people talking. They’re not used to people caring about their past or their stories. You have to start gently and tease it out of them.

But if you persist and show an eagerness to listen, eventually the barriers come down and the past comes tumbling out, narrated by the person who lived it. Trust me when I say there is nothing more worthwhile that you can do.

We tend to ignore old people because we just have too much going on in our own lives. But it kills me just a little bit inside when I see grandparents and grandchildren in the same room, the young staring at a screen and the old staring off into space because no one is talking to them.

The screen will always be there. But Grandma isn’t getting any younger. She has lived her life. And I guarantee that in the breadth and depth of that life, there are some amazing stories you should take some time to listen to.

The Difference Between a Right-Wing and Left-Wing Media Brain

I’ve been hesitating to write this column. But increasingly, everything I write and think about seems to come back to the same point – the ideological divide between liberals and conservatives. That divide is tearing the world apart. And technology seems to be accelerating the forces causing the rift, rather than reversing them.

First, a warning: I am a Liberal. That probably doesn’t come as a surprise to anyone who has read any of my columns, but I did want to put it out there. And the reason I feel that warning is required is that with this column, I’m diving into dangerous waters – I’m going to be talking about the differences between liberal and conservative brains, particularly those brains that are working in the media space.

Last week, I talked about the evolution of media bias through two – and, what seems increasingly likely, three – impeachment proceedings. Mainstream media has historically had a left bias. In a longitudinal study of journalism, two professors at Indiana University – Lars Willnat and David Weaver – found that in 2012, just 7% of American journalists identified themselves as Republican, while 28% said they were Democrats. Over 50% said they were Independent, but I suspect this is more a statement on the professed objectivity of journalists than their actual political leanings. I would be willing to bet that those independents sway left far more often than they sway right.

So, it’s entirely fair to say that mainstream media does have a liberal bias. The question is – why? Is it a premeditated conspiracy or just a coincidental correlation? I believe the bias is actually self-selected. Those who choose to go into journalism have brains that work in a particular way – a way that is most often found in those who fall on the liberal end of the spectrum.

I first started putting this hypothesis together when I read the following passage in Robert Sapolsky’s book “Behave: The Biology of Humans at Our Best and Worst.” Sapolsky was talking about a growing number of studies looking at the cognitive differences between liberals and conservatives: “This literature has two broad themes. One is that rightists are relatively uncomfortable intellectually with ambiguity…The other is that leftists, well, think harder, have a greater capacity for what the political scientist Philip Tetlock of the University of Pennsylvania calls ‘integrative complexity’.”

Sapolsky goes on to differentiate these intellectual approaches: “conservatives start gut and stay gut; liberals go from gut to head.”

Going from “gut to head” is a pretty good quality for a journalist. In fact, you could say it’s their job description.

Sapolsky cites a number of studies he bases this conclusion on. In the abstract of one of these studies, the researchers note: “Liberals are more likely to process information systematically, recognize differences in argument quality, and to be persuaded explicitly by scientific evidence, whereas conservatives are more likely to process information heuristically, attend to message-irrelevant cues such as source similarity, and to be persuaded implicitly through evaluative conditioning. Conservatives are also more likely than liberals to rely on stereotypical cues and assume consensus with like-minded others.”

This is about as good a description of the differences between mainstream media and the alt-right media as I’ve seen. The researchers further note that, “Liberals score higher than conservatives on need for cognition and open-mindedness, whereas conservatives score higher than liberals on intuitive thinking and self-deception.”

That explains so much of the current situation we’re finding ourselves in. Liberals tend to be investigative journalists. Conservatives tend to be opinion columnists and pundits. One is using their head. The other is using their gut.

Of course, it’s not just the conservative media that rely on gut instinct. The Commander in Chief uses the same approach. In a 2016 article in the Washington Post, Marc Fisher probed Trump’s disdain for reading: “He said in a series of interviews that he does not need to read extensively because he reaches the right decisions ‘with very little knowledge other than the knowledge I [already] had, plus the words “common sense,” because I have a lot of common sense and I have a lot of business ability.’”

I have nothing against intuition. The same Post article goes on to give examples of other presidents who relied on gut instinct (Fisher notes, however, that even when these are factored in, Trump is still an outlier). But when the stakes are as high as they are now, I prefer intuition combined with some research and objective evaluation.

We believe in the concept of equality and fairness, as we should. For that reason, I hesitate to put yet another wall between conservatives and liberals. But – in seeking answers to complex questions – I think we have to be open and honest about the things that make us different. There is a reason some of us are liberals and some of us are conservatives – our brains work differently*. And when those differences extend to our processing of our respective realities and the sources we turn to for information, we should be aware of them. We should take them into account in evaluating our media choices. We should go forward with open minds.

Unfortunately, I suspect I’m preaching to the choir. If you got this far in my column, you’re probably a liberal too.

* If you really want to dig further, check out the paper “Are Conservatives from Mars and Liberals from Venus? Maybe Not So Much” by Linda Skitka, one of the foremost researchers exploring this question.

Photos: Past, Present and Future

I was at a family reunion this past week. While there, my family did what families do at reunions: We looked at family photos.

In our case, our photographic history started some 110 years or so ago, with my great-great-grandfather George and his wife Kezia. We have a stunning picture of the couple, with Kezia wearing an ostrich feather hat.

George and Kezia Ching – Redondo Beach

At the time of the photo, George was an ostrich feather dyer in Hollywood, California. Apparently, there was a need for dyed ostrich feathers in turn-of-the-century Hollywood. That need didn’t last for long. The bottom fell out of the ostrich feather market and George and Kezia turned their sights north of the 49th, high-tailing it for Canada.

We’re a lucky family. We have four generations of photographic evidence of my mother’s forebears. They were solidly middle class and could afford the luxury of having a photo taken, even around the turn of the century. There were plenty of preserved family images that fueled many conversations and sparked memories as we gathered the clan.

What was interesting to me is that some 110 years after this memorable portrait was taken, we also took many new photos so we could remember this reunion in the future.  With all the technological change that has happened since George and Kezia posed in all their ostrich-feather-accessorized finery, the basic format of a two-dimensional visual representation was still our chosen medium for capturing the moment.

We talk about media a lot here at MediaPost — enough that it’s included in the headline of the post you’re reading. I think it’s worth a quick nod of appreciation to media that have endured for more than a century. Books and photos both fall into this category. Great-Great Grandfather George might be a bit flustered if he was looking at a book on a Kindle or viewing the photo on an iPhone, but the format of the medium itself would not be that foreign to him. He would be able to figure it out.

What dictates longevity in media? I think we have an inherent love for media that are a good match for both our senses and our capacity to imagine. Books give us the cognitive room to imagine worlds that no CGI effect has yet been able to match. And a photograph is still the most convenient way to render permanent the fleeting images that chase across our visual cortex. This is all the more true when those images are comprised of the faces we love. Like books, photos also give our minds the room to fill in the blanks, remembering the stories that go with the static image.

Compare a photo to something like a video. We could easily have taken videos to capture the moment. All of us had a pretty good video camera in our pockets. But we didn’t. Why not?

Again, we have to look at intended purpose at the moment of future consumption. Videos are linear. They force their own narrative arc upon us. We have to allocate the time required to watch the video to its conclusion. But a photo is randomly accessed. Our senses consume it at their own pace and prerogative, free of the restraints of the medium itself. For things like communal memories at a family reunion, a photo is the right match. There are circumstances where a video would be a better fit. This wasn’t one of them.

Our Family – 2019

There is one thing about photos that will be different moving forward. They are now in the digital domain, which means they can be stored with no restraints on space. It also means that we can take advantage of appended metadata. For the sake of my descendants, I hope this makes the bond between the photo and the stories a little more durable than what we currently deal with. If we were lucky, we had a quick notation on the back of an old photo to clarify the whos, whens and wheres.
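As a side note on what that appended metadata actually looks like, here is a minimal sketch using Python and the Pillow imaging library. The file name is hypothetical, and the tags returned depend entirely on the camera or phone that took the shot; this is an illustration, not anything from our family archive.

```python
# Minimal sketch: listing the EXIF metadata a camera or phone appends to a photo.
# Requires the Pillow library (pip install Pillow); the file name is made up.
from PIL import Image, ExifTags

img = Image.open("reunion_2019.jpg")
exif = img.getexif()

for tag_id, value in exif.items():
    tag = ExifTags.TAGS.get(tag_id, tag_id)   # translate numeric tag IDs into readable names
    print(f"{tag}: {value}")                  # e.g. DateTime, Make, Model
```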

A few of my more archivally inclined cousins started talking about the future generations of our family. When they remember us, what media will they be using? Will they be looking at the many selfies and digital shots that were taken in 2019, trying to remember who that person between Cousin Dave and Aunt Lorna was? What platform will be used to store the photos? What will be the equivalent of the family album in 2119? How will they be archiving their own memories?

I suspect that if I were there, I wouldn’t be that surprised at the medium of choice.

Reality vs. Meta-Reality

“I know what I like, and I like what I know;”
Genesis

I watched the Grammys on Sunday night. And as it turned out, I didn’t know what I liked. And I thought I liked what I knew. But by the time I wrote this column (on Monday after the Grammys) I had changed my mind.

And it was all because of the increasing gap between what is real, and what is meta-real.

Real is what we perceive with our senses at the time it happens. Meta-real is how we reshape reality after the fact and then preserve it for future reference. And thanks to social media, the meta-real is a booming business.

Nobel laureate Daniel Kahneman first explored this with his work on the experiencing self and the remembering self. In a stripped-down example, imagine two scenarios. Scenario 1 has your hand immersed for 60 seconds in ice-cold water that causes a moderate amount of pain. Scenario 2 has your hand immersed for 90 seconds: for the first 60 seconds you’re immersed in water at the same temperature as in Scenario 1, but then you leave your hand immersed for an additional 30 seconds while the water is slowly warmed by 1 degree.

After going through both scenarios and being told you have to repeat one of them, which would you choose? Logically speaking, you should choose Scenario 1. While uncomfortable, it spares you an extra 30 seconds of a slightly less painful experience. But for those who went through it, that’s not what happened. Eighty percent of those who noticed that the water got a bit warmer chose to redo Scenario 2.

It turns out that we have two mental biases that kick in when we remember something we experienced:

  1. Duration doesn’t count
  2. Only the peak (best or worst moment) and the end of the experience are registered (see the sketch below).
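To make those two biases concrete, here is a minimal sketch in Python. The per-second pain scores and the simple peak-plus-end average are my own illustrative assumptions, not measurements from Kahneman’s experiment; the point is only that a memory built from the peak and the end ignores duration, which is why the longer Scenario 2 can be remembered as the better option.

```python
# Illustrative sketch of the peak-end idea (made-up numbers, not the study's data).
# Pain is scored once per second: 0 = no pain, 10 = worst imaginable.

def remembered_pain(samples):
    """Peak-end rule: memory keeps roughly the average of the worst moment
    and the final moment; how long the experience lasted is ignored."""
    return (max(samples) + samples[-1]) / 2

scenario_1 = [7] * 60              # 60 seconds of ice water at pain level 7
scenario_2 = [7] * 60 + [6] * 30   # same 60 seconds, plus 30 slightly warmer seconds

print(sum(scenario_1), sum(scenario_2))   # total pain actually endured: 420 vs. 600
print(remembered_pain(scenario_1))        # remembered pain: 7.0
print(remembered_pain(scenario_2))        # remembered pain: 6.5 -- the longer scenario
                                          # is recalled as the less unpleasant one
```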

This applies to a lot more than just cold-water experiments. It also holds true for vacations, medical procedures, movies and even the Grammys. Not only that, there is an additional layer of meta-analysis that shifts us even further from the reality we actually experienced.

After I watched the Grammys, I had my own opinion of which performances I liked and those I didn’t care for. But that opinion was a work in progress. On Monday morning, I searched for “Best moments of Grammys 2019.” Rather quickly, my opinion changed to conform with what I was reading. And those summaries were in turn based on an aggregate of opinions gleaned from social media. It was Wisdom of Crowds – applied retroactively.

The fact is that we don’t trust our own opinions. This is hardwired in us. Conformity is something the majority of us look for. We don’t want to be the only one in the room with a differing opinion. Social psychologist Solomon Asch proved this almost 70 years ago. The difference is that in the Asch experiment, conformity happened in the moment. Now, thanks to our digital environment where opinions on anything can be found at any time, conformity happens after the fact. We “sandbox” our own opinions, waiting until we can see if they match the social media consensus. For almost any event you can name, there is now a market for opinion aggregation and analysis. We take this “meta” data and reshape our own reality to match.

It’s not just the malleability of our reality that is at stake here. Our memories serve as guides for the future. They color the actions we take and the people we become. We evolved as conformists because that was a much surer bet for our survival than relying on our own experiences alone.  But might this be a case of a good thing taken too far? Are we losing too much confidence in the validity of our own thoughts and opinions?

I’m pretty sure it doesn’t matter what Gord Hotchkiss thinks about the Grammys of 2019. But I fear there’s much more at stake here.

Addicted to Tech

A few columns ago, I mentioned one of the aspects that is troubling me about technology – the shallowness of social media. I had mentioned at the time that there were other aspects that were equally troubling. Here’s one:

Technology is addictive – and it’s addictive by design.

Let’s begin by looking at the definition of addiction:

Persistent compulsive use of a substance known by the user to be harmful

So, let’s break it down. I don’t think you can quibble with the persistent, compulsive use part. When’s the last time you had your iPhone in your hand? We can simply swap out “substance” for “device” or “technology.” So that leaves us with the last qualifier, “known by the user to be harmful” – and there are two parts to this: is it harmful, and does the user know it’s harmful?

First, let’s look at the neurobiology of addiction. What causes us to use something persistently and compulsively? Here, dopamine is the culprit. Our reward center uses dopamine and the pleasurable sensation it produces as a positive reinforcement to cause us to pursue activities which over many hundreds of generations have proven to be evolutionarily advantageous. But Dr. Gary Small, from the UCLA Brain Research Institute, warns us that this time could be different:

“The same neural pathways in the brain that reinforce dependence on substances can reinforce compulsive technology behaviors that are just as addictive and potentially destructive.”

We like to think of big tobacco as the most evil of all evil empires – guilty of promoting addiction to a harmful substance – but is there a lot separating them from the purveyors of tech – Facebook or Google, for instance? According to Tristan Harris, there may be a very slippery slope between the two. I’ve written about Tristan before. He’s the former Google Product Manager who launched the Time Well Spent non-profit, devoted to stopping “tech companies from hijacking our minds.” Harris points the finger squarely at the big Internet platforms, which he says are intentionally designed to suck up as much of our time as possible. There’s empirical evidence to back up Harris’s accusations. Researchers at Michigan State University and from two universities in the Netherlands found that even seeing the Facebook logo can trigger a conditioned response in a social media user that starts the dopamine cycle spinning. We start jonesing for a social media fix.

So, what if our smart phones and social media platforms seduce us into using them compulsively? What’s the harm, as long as it’s not hurting us? That’s the second part of the addiction equation – is whatever we’re using harmful? After all, it’s not like tobacco, where it was proven to cause lung cancer.

Ah, but that’s the thing, isn’t it? We were smoking cigarettes for almost a hundred years before we finally found out they were bad for us. Sometimes it takes a while for the harmful effects of addiction to appear. The same could be true for our tech habit.

Tech addiction plays out at many different levels of cognition. This could potentially be much more sinister than just the simple waste of time that Tristan Harris is worried about. There’s mounting evidence that overuse of tech could dramatically alter our ability to socialize effectively with other humans. The debate, which I’ve talked about before, comes when we substitute screen-to-screen interaction for face-to-face. The supporters say that this is simply another type of social bonding – one that comes with additional benefits. The naysayers worry that we’re just not built to communicate through screens and that – sooner or later – there will be a price to be paid for our obsessive use of digital platforms.

Dr. Jean Twenge, professor of psychology at San Diego State University, researches generational differences in behavior. It’s here where the full impact of the introduction of a disruptive environmental factor can be found. She found a seismic shift in behaviors between Millennials and the generation that followed them. It was a profound difference in how these generations viewed the world and where they spent their time. And it started in 2012 – the year when the proportion of Americans who owned a smartphone surpassed 50 percent. She sums up her concern in unequivocal terms:

“The twin rise of the smartphone and social media has caused an earthquake of a magnitude we’ve not seen in a very long time, if ever. There is compelling evidence that the devices we’ve placed in young people’s hands are having profound effects on their lives—and making them seriously unhappy.”

Not only are we less happy, we may be becoming less smart. As we become more reliant on technology, we do something called cognitive off-loading. We rely on Google rather than our memories to retrieve facts. We trust our GPS more than our own wayfinding strategies to get us home. Cognitive off-loading is a way to move beyond the limits of our own minds, but there may be an unacceptable trade-off here. Brains are like muscles – if we stop using them, they begin to atrophy.

Let’s go back to that original definition and the three qualifying criteria:

  • Persistent, compulsive use
  • Harmful
  • We know it’s harmful

In the case of tech, let’s not wait a hundred years to put check marks after all of these.

Flow and the Machine

“In the future, either you’re going to be telling a machine what to do, or the machine is going to be telling you.”

Christopher Penn – VP of Marketing Technology, Shift Communications.

I often talk about the fallibility of the human brain – those irrational cognitive biases that can cause us to miss the reality that’s right in front of our face. But there’s another side to the human brain – the intuitive, almost mystical machinations that happen when we’re on a cognitive roll, balancing gloriously on the edge between consciousness and subconsciousness. Malcolm Gladwell took a glancing shot at this in his mega-bestseller Blink. But I would recommend going right to the master of “Flow” – Mihaly Csikszentmihalyi (pronounced, if you’re interested – me-hi Chick-sent-me-hi). The Hungarian psychologist coined the term “flow” – referring to a highly engaged mental state where we’re completely absorbed with the work at hand. Csikszentmihalyi calls it the “psychology of optimal experience.”

It turns out there’s a pretty complicated neuroscience behind flow. In a blog post, gamer Adam Sinicki describes a state where the brain finds an ideal balance between instinctive behavior and total focus on one task. The state is called transient hypofrontality. It can sometimes be brought on by physical exercise, which is why some people can think better while walking, or even jogging. The brain juggles the resources required, and this can force a stepping down of the prefrontal cortex, the part of the brain that causes us to question ourselves. This part of the brain is required in unfamiliar circumstances, but in a situation where we’ve thoroughly rehearsed the actions required, it’s actually better if it takes a break. This allows other – more intuitive – parts of the brain to come to the fore. And that may be the secret of “flow.” It may also be the one thing that machines can’t replicate – yet.

The Rational Machine

If we were to compare the computer to a part of the brain, it would probably be the prefrontal cortex (PFC). When we talk about cognitive computing, what we’re really talking about is building a machine that can mimic – or exceed – the capabilities of the PFC. This is the home of our “executive function” – complex decision making, planning, rationalization and our own sense of self. It’s probably not a coincidence that the part of our brain we rely on to reason through complex challenges, like designing artificial intelligence, would build a machine in its own image. And in this instance, we’re damned close to surpassing ourselves. The PFC is an impressive chunk of neurobiology in its flexibility and power, but speedy it’s not. In fact, we’ve found that if we happen to make a mistake, the brain slows almost to a standstill. It shakes our confidence and kills, in its tracks, any “flow” that might be happening. This is what happens to athletes when they choke. With artificial intelligence, we are probably on the cusp of creating machines that can do most of what the PFC can do, only faster, more reliably and with the ability to process much more information.

But there’s a lot more to the brain than just the PFC. And it’s in this ethereal intersection between reason and intuition where the essence of being human might be hiding.

The Future of Flow

What if we could harness “flow” at will, working in partnership with a machine that can crunch data in real time and present us with the inputs required to continue our flow-fueled exploration without the fear of making a mistake? It’s not so much a machine telling us what to do – or the reverse – as it is a partnership between human intuition and machine-based rationalization. It’s analogous to driving a modern car, where the intelligent safety and navigation features backstop our ability to drive.

Of course, it may just be a matter of time before machines best us in this area as well. Perhaps machines already have mastered flow because they don’t have to worry about the consequences of making a mistake. But it seems to me that if humans have a future, it’s not going to be in our ability to crunch data and rationalize. We’ll have to find something a little more magical to stake our claim with.

Branding in the Post-Truth Age

If 2016 was nothing else, it was a watershed year for the concept of branding. In the previous 12 months, we saw a decoupling of the two elements we have always believed make up brands. As fellow Spinner Cory Treffiletti said recently:

“You have to satisfy the emotional quotient as well as the logical quotient for your brand.  If not, then your brand isn’t balanced, and is likely to fall flat on its face.”

But another Mediapost article highlighted an interesting trend in branding:

“Brands will strive to be ‘meticulously un-designed’ in 2017, according to WPP brand agency Brand Union.”

This, I believe, speaks to where brands are going. And depending on which side of the agency desk you happen to be on, this could either be good news or downright disheartening.

Let’s start with the logical side of branding. In their book Absolute Value, Itamar Simonson and Emanuel Rosen sounded the death knell for brands as a proxy for consumer information. Their premise, which I agree with, is that in a market that is increasingly moving towards perfect information, brands have lost their position of trust. We would rather rely on information that comes from non-marketing sources.

But brands have been aspiring to transcend their logical side for at least 5 decades now. This is the emotional side of branding that Treffiletti speaks of. And here I have to disagree with Simonson and Rosen. This form of branding appears to be very much alive and well, thank you. In fact, in the past year, this form of branding has upped the game considerably.

Brands, at their most potent, embed themselves in our belief systems. It is here, close to our emotional hearts, that brands find their Promised Land. Read Montague’s famous Coke neuro-imaging experiment showed that for Coke drinkers, the brand became part of who they are. Research I was involved in showed that favored brands are positively responded to in a split second, far faster than the rational brain can act. We are hardwired to believe in brands, and the more loved the brand, the stronger the reaction. So let’s look at beliefs for a moment.

Not all beliefs are created equal. Our beliefs have an emotional valence – some beliefs are defended more strongly than others. There is a hierarchy of belief defense. At the highest level are our core beliefs: how we feel about things like politics and religion. Brands are trying to intrude on this core belief space. There has been no better example of this than the brand of Donald Trump.

Beliefs are funny things. From an evolutionary perspective, they’re valuable. They’re mental shortcuts that guide our actions without requiring us to think. They are a type of emotional auto-pilot. But they can also be quite dangerous for the same reason. We defend our beliefs against skeptics – and we defend our core beliefs most vigorously. Reason has nothing to do with it. It is this type of defense system that brands would love to build around themselves.

We like to believe our beliefs are unique to us – but in actual fact, beliefs also materialize out of our social connections. If enough people in our social network believe something is true, so will we. We will even create false memories and narratives to support the fiction. The evolutionary logic is quite simple. Tribes have better odds for survival than individuals, and our tribe will be more successful if we all think the same way about certain things. Beliefs create tribal cohesion.

So, the question is – how does a brand become a belief? It’s this question that possibly points the way in which brands will evolve in the Post-Truth future.

Up to now, brands have always been unilaterally “manufactured” – carefully crafted by agencies as a distillation of marketing messages and delivered to an audience. But now, brands are multilaterally “emergent” – formed through a network of socially connected interactions. All brands are now trying to ride the amplified waves of social media. This means they have to be “meme-worthy” – which really means they have to be both note and share-worthy. To become more amplifiable, brands will become more “jagged,” trying to act as catalysts for going viral. Branding messages will naturally evolve towards outlier extremes in their quest to be noticed and interacted with. Brands are aspiring to become “brain-worms” – wait, that’s not quite right – brands are becoming “belief-worms,” slipping past the rational brain if at all possible to lodge themselves directly in our belief systems. Brands want to be emotional shorthand notations that resonate with our most deeply held core beliefs. We have constructed a narrative of who we are and brands that fit that narrative are adopted and amplified.

It’s this version of branding that seems to be where we’re headed – a socially infectious virus that creates its own version of the truth and builds a bulwark of belief to defend itself. Increasingly, branding has nothing to do with rational thought or a quest for absolute value.