Reality vs. Meta-Reality

“I know what I like, and I like what I know;”
Genesis

I watched the Grammys on Sunday night. And as it turned out, I didn’t know what I liked. And I thought I liked what I knew. But by the time I wrote this column (on the Monday after the Grammys), I had changed my mind.

And it was all because of the increasing gap between what is real, and what is meta-real.

Real is what we perceive with our senses at the time it happens. Meta-real is how we reshape reality after the fact and then preserve it for future reference. And thanks to social media, the meta-real is a booming business.

Nobel laureate Daniel Kahneman first explored this with his work on the experiencing self and the remembering self. In a stripped-down example, imagine two scenarios. Scenario 1 has your hand immersed for 60 seconds in ice-cold water that causes a moderate amount of pain. Scenario 2 has your hand immersed for 90 seconds: for the first 60 seconds you’re immersed in water at the same temperature as Scenario 1, but then you leave your hand immersed for an additional 30 seconds while the water is slowly warmed by 1 degree.

After going through both scenarios and being told you have to repeat one of them, which would you choose? Logically speaking, you should choose Scenario 1. While uncomfortable, it spares you an extra 30 seconds of a slightly less painful experience. But for those who went through it, that’s not what happened. Eighty percent of those who noticed that the water got a bit warmer chose to redo Scenario 2.

It turns out that we have two mental biases that kick in when we remember something we experienced:

  1. Duration doesn’t count.
  2. Only the peak (best or worst moment) and the end of the experience are registered (a toy model of this heuristic is sketched below).
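
For the technically minded, here’s a toy model of the peak-end heuristic – a minimal sketch in Python, with invented pain ratings, meant only to illustrate the idea rather than reproduce Kahneman’s actual methodology. Remembered intensity is approximated as the average of the worst moment and the final moment; duration never enters the calculation.

    # A toy model of the peak-end heuristic. Ratings are invented;
    # pain is scored per second on a 0-10 scale.

    def remembered_pain(ratings):
        # Average the worst moment and the final moment. Duration is
        # deliberately absent -- that is the point of the heuristic.
        return (max(ratings) + ratings[-1]) / 2

    scenario_1 = [7] * 60             # 60 seconds of ice-cold water
    scenario_2 = [7] * 60 + [6] * 30  # same, plus 30 slightly warmer seconds

    print(remembered_pain(scenario_1))  # 7.0
    print(remembered_pain(scenario_2))  # 6.5 -- remembered as less painful

Thirty extra seconds of actual discomfort, yet the longer episode is the one the remembering self prefers.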

This applies to a lot more than just cold-water experiments. It also holds true for vacations, medical procedures, movies and even the Grammys. Not only that, there is an additional layer of meta-analysis that shifts us even further from the reality we actually experienced.

After I watched the Grammys, I had my own opinion of which performances I liked and those I didn’t care for. But that opinion was a work in progress. On Monday morning, I searched for “Best moments of Grammys 2019.” Rather quickly, my opinion changed to conform with what I was reading. And those summaries were in turn based on an aggregate of opinions gleaned from social media. It was Wisdom of Crowds – applied retroactively.

The fact is that we don’t trust our own opinions. This is hardwired in us. Conformity is something the majority of us look for. We don’t want to be the only one in the room with a differing opinion. Social psychologist Solomon Asch proved this almost 70 years ago. The difference is that in the Asch experiment, conformity happened in the moment. Now, thanks to our digital environment where opinions on anything can be found at any time, conformity happens after the fact. We “sandbox” our own opinions, waiting until we can see if they match the social media consensus. For almost any event you can name, there is now a market for opinion aggregation and analysis. We take this “meta” data and reshape our own reality to match.

It’s not just the malleability of our reality that is at stake here. Our memories serve as guides for the future. They color the actions we take and the people we become. We evolved as conformists because that was a much surer bet for our survival than relying on our own experiences alone. But might this be a case of a good thing taken too far? Are we losing too much confidence in the validity of our own thoughts and opinions?

I’m pretty sure it doesn’t matter what Gord Hotchkiss thinks about the Grammys of 2019. But I fear there’s much more at stake here.

Addicted to Tech

A few columns ago, I mentioned one of the aspects of technology that troubles me – the shallowness of social media. I said at the time that there were other aspects that were equally troubling. Here’s one:

Technology is addictive – and it’s addictive by design.

Let’s begin by looking at the definition of addiction:

Persistent compulsive use of a substance known by the user to be harmful

So, let’s break it down. I don’t think you can quibble with the persistent, compulsive use part. When’s the last time you had your iPhone in your hand? And we can simply swap out “substance” for “device” or “technology.” That leaves us with the last qualifier, “known by the user to be harmful” – and there are two parts to this: is it harmful, and does the user know it’s harmful?

First, let’s look at the neurobiology of addiction. What causes us to use something persistently and compulsively? Here, dopamine is the culprit. Our reward center uses dopamine, and the pleasurable sensation it produces, as positive reinforcement to push us to pursue activities that, over many hundreds of generations, have proven to be evolutionarily advantageous. But Dr. Gary Small, from the UCLA Brain Research Institute, warns us that this time could be different:

“The same neural pathways in the brain that reinforce dependence on substances can reinforce compulsive technology behaviors that are just as addictive and potentially destructive.”

We like to think of big tobacco as the most evil of all evil empires – guilty of promoting addiction to a harmful substance – but is there much separating it from the purveyors of tech – Facebook or Google, for instance? According to Tristan Harris, there may be a very slippery slope between the two. I’ve written about Tristan before. He’s the former Google product manager who launched the Time Well Spent non-profit, devoted to stopping “tech companies from hijacking our minds.” Harris points the finger squarely at the big Internet players for building platforms that are intentionally designed to soak up as much of our time as possible. There’s empirical evidence to back up Harris’s accusations. Researchers at Michigan State University and from two universities in the Netherlands found that even seeing the Facebook logo can trigger a conditioned response in a social media user that starts the dopamine cycle spinning. We start jonesing for a social media fix.

So, what if our smartphones and social media platforms do seduce us into using them compulsively? That brings us to the second part of the addiction equation: is whatever we’re using actually harmful? After all, it’s not like tobacco, which was proven to cause lung cancer.

Ah, but that’s the thing, isn’t it? We were smoking cigarettes for almost a hundred years before we finally found out they were bad for us. Sometimes it takes a while for the harmful effects of an addiction to appear. The same could be true for our tech habit.

Tech addiction plays out at many different levels of cognition, and it could be much more sinister than the simple waste of time that Tristan Harris is worried about. There’s mounting evidence that overuse of tech could dramatically alter our ability to socialize effectively with other humans. The debate, which I’ve talked about before, comes when we substitute screen-to-screen interaction for face-to-face. The supporters say that this is simply another type of social bonding – one that comes with additional benefits. The naysayers worry that we’re just not built to communicate through screens, and that – sooner or later – there will be a price to be paid for our obsessive use of digital platforms.

Dr. Jean Twenge, professor of psychology at San Diego State University, researches generational differences in behavior. It’s here that the full impact of a disruptive environmental factor can be seen. She found a seismic shift in behaviors between Millennials and the generation that followed them – a profound difference in how these generations viewed the world and where they spent their time. And it started in 2012, the year the proportion of Americans who owned a smartphone surpassed 50 percent. She sums up her concern in unequivocal terms:

“The twin rise of the smartphone and social media has caused an earthquake of a magnitude we’ve not seen in a very long time, if ever. There is compelling evidence that the devices we’ve placed in young people’s hands are having profound effects on their lives—and making them seriously unhappy.”

Not only are we less happy, we may be becoming less smart. As we become more reliant on technology, we do something called cognitive off-loading. We rely on Google rather than our memories to retrieve facts. We trust our GPS more than our own wayfinding strategies to get us home. Cognitive off-loading is a way to move beyond the limits of our own minds, but there may be an unacceptable trade-off here. Brains are like muscles – if we stop using them, they begin to atrophy.

Let’s go back to that original definition and the three qualifying criteria:

  • Persistent, compulsive use
  • Harmful
  • We know it’s harmful

In the case of tech, let’s not wait a hundred years to put check marks after all of these.

Flow and the Machine

“In the future, either you’re going to be telling a machine what to do, or the machine is going to be telling you.”

Christopher Penn – VP of Marketing Technology, Shift Communications.

I often talk about the fallibility of the human brain – those irrational, cognitive biases that can cause us to miss the reality that’s right in front of our face. But there’s another side to the human brain – the intuitive, almost mystical machinations that happen when we’re on a cognitive roll, balancing gloriously on the edge between consciousness and subconsciousness. Malcolm Gladwell took a glancing shot at this in his mega-bestseller Blink. But I would recommend going right to the master of “Flow” – Mihaly Csikszentmihalyi (pronounced, if you’re interested, me-hi Chick-sent-me-hi). The Hungarian psychologist coined the term “flow,” referring to a highly engaged mental state in which we’re completely absorbed in the work at hand. Csikszentmihalyi calls it the “psychology of optimal experience.”

It turns out there’s a pretty complicated neuroscience behind flow. In a blog post, gamer Adam Sinicki describes a state where the brain finds an ideal balance between instinctive behavior and total focus on one task. The state is called transient hypofrontality. It can sometimes be brought on by physical exercise, which is why some people can think better while walking or even jogging. The brain juggles the resources required, and this can force a stepping-down of the prefrontal cortex – the part of the brain that causes us to question ourselves. That part of the brain is essential in unfamiliar circumstances, but in a situation where we’ve thoroughly rehearsed the actions required, it’s actually better if it takes a break. This allows other – more intuitive – parts of the brain to come to the fore. And that may be the secret of “Flow.” It may also be the one thing that machines can’t replicate – yet.

The Rational Machine

If we were to compare the computer to a part of the brain, it would probably be the prefrontal cortex (PFC). When we talk about cognitive computing, what we’re really talking about is building a machine that can mimic – or exceed – the capabilities of the PFC. This is the home of our “executive function” – complex decision making, planning, rationalization and our own sense of self. It’s probably not a coincidence that the part of our brain we rely on to reason through complex challenges – like designing artificial intelligence – would build a machine in its own image. And in this instance, we’re damned close to surpassing ourselves. The PFC is an impressive chunk of neurobiology in its flexibility and power, but speedy it’s not. In fact, we’ve found that if we happen to make a mistake, the brain slows almost to a standstill. It shakes our confidence and kills any “flow” that might be happening in its tracks. This is what happens to athletes when they choke. With artificial intelligence, we are probably on the cusp of creating machines that can do most of what the PFC can do, only faster, more reliably and with the ability to process much more information.

But there’s a lot more to the brain than just the PFC. And it’s in this ethereal intersection between reason and intuition where the essence of being human might be hiding.

The Future of Flow

What if we could harness “flow” at will – working in partnership with a machine that can crunch data in real time and present us with the inputs required to continue our flow-fueled exploration, without the fear of making a mistake? It’s not so much a machine telling us what to do – or the reverse – as it is a partnership between human intuition and machine-based rationalization. It’s analogous to driving a modern car, where the intelligent safety and navigation features backstop our ability to drive.

Of course, it may just be a matter of time before machines best us in this area as well. Perhaps machines already have mastered flow because they don’t have to worry about the consequences of making a mistake. But it seems to me that if humans have a future, it’s not going to be in our ability to crunch data and rationalize. We’ll have to find something a little more magical to stake our claim with.

Branding in the Post-Truth Age

If 2016 was nothing else, it was a watershed year for the concept of branding. In the previous 12 months, we saw a decoupling of the two elements we have always believed make up brands. As fellow Spinner Cory Treffiletti said recently:

“You have to satisfy the emotional quotient as well as the logical quotient for your brand. If not, then your brand isn’t balanced, and is likely to fall flat on its face.”

But another Mediapost article highlighted an interesting trend in branding:

“Brands will strive to be ‘meticulously un-designed’ in 2017, according to WPP brand agency Brand Union.”

This, I believe, speaks to where brands are going. And depending on which side of the agency desk you happen to be on, this could either be good news or downright disheartening.

Let’s start with the logical side of branding. In their book Absolute Value, Itamar Simonson and Emanuel Rosen sounded the death knell for brands as a proxy for consumer information. Their premise, which I agree with, is that in a market that is increasingly moving towards perfect information, brands have lost their position of trust. We would rather rely on information that comes from non-marketing sources.

But brands have been aspiring to transcend their logical side for at least five decades now. This is the emotional side of branding that Treffiletti speaks of. And here I have to disagree with Simonson and Rosen: this form of branding appears to be very much alive and well, thank you. In fact, in the past year, it has upped the game considerably.

Brands, at their most potent, embed themselves in our belief systems. It is here, close to our emotional hearts, that the Promised Land for brands lies. Read Montague’s famous Coke neuro-imaging experiment showed that for Coke drinkers, the brand became part of who they are. Research I was involved in showed that we respond positively to favored brands in a split second, far faster than the rational brain can act. We are hardwired to believe in brands, and the more loved the brand, the stronger the reaction. So let’s look at beliefs for a moment.

Not all beliefs are created equal. Our beliefs have an emotional valence – some are defended more strongly than others. There is a hierarchy of belief defense. At the highest level are our core beliefs: how we feel about things like politics and religion. Brands are trying to intrude on this core belief space. There has been no better example of this than the brand of Donald Trump.

Beliefs are funny things. From an evolutionary perspective, they’re valuable. They’re mental shortcuts that guide our actions without requiring us to think – a kind of emotional autopilot. But they can also be quite dangerous for the same reason. We defend our beliefs against skeptics – and we defend our core beliefs most vigorously. Reason has nothing to do with it. It is this type of defense system that brands would love to build around themselves.

We like to believe our beliefs are unique to us – but in fact, beliefs also materialize out of our social connections. If enough people in our social network believe something is true, so will we. We will even create false memories and narratives to support the fiction. The evolutionary logic is quite simple: tribes have better odds of survival than individuals, and our tribe will be more successful if we all think the same way about certain things. Beliefs create tribal cohesion.

So, the question is – how does a brand become a belief? It’s this question that possibly points the way in which brands will evolve in the Post-Truth future.

Up to now, brands have always been unilaterally “manufactured” – carefully crafted by agencies as a distillation of marketing messages and delivered to an audience. But now, brands are multilaterally “emergent” – formed through a network of socially connected interactions. All brands are now trying to ride the amplified waves of social media. This means they have to be “meme-worthy” – which really means they have to be both notable and share-worthy. To become more amplifiable, brands will become more “jagged,” trying to act as catalysts for going viral. Branding messages will naturally evolve towards outlier extremes in their quest to be noticed and interacted with. Brands are aspiring to become “brain-worms” – wait, that’s not quite right – brands are becoming “belief-worms,” slipping past the rational brain if at all possible to lodge themselves directly in our belief systems. Brands want to be emotional shorthand notations that resonate with our most deeply held core beliefs. We have constructed a narrative of who we are, and brands that fit that narrative are adopted and amplified.

It’s this version of branding that seems to be where we’re headed – a socially infectious virus that creates its own version of the truth and builds a bulwark of belief to defend itself. Increasingly, branding has nothing to do with rational thought or a quest for absolute value.

Ex Machina’s Script for Our Future

One of the more interesting movies I’ve watched in the past year has been Ex Machina. Unlike the abysmally disappointing Transcendence (how can you screw up Kurzweil, for God’s sake?), Ex Machina is a tightly directed, frighteningly claustrophobic sci-fi thriller that peels back the moral layers of artificial intelligence one by one.

If you haven’t seen it, do so. But until you do, here’s the basic setup. Caleb Smith (Domhnall Gleeson) is a programmer at a huge Internet search company called Blue Book (think Google). He wins a contest where the prize is a week spent with the CEO, Nathan Bateman (Oscar Isaac), at his private retreat. Bateman’s character is best described as Larry Page meets Steve Jobs meets Larry Ellison meets Charlie Sheen – brilliant as hell, but one messed-up dude. It soon becomes apparent that the contest is a ruse and Smith is there to play the human in an elaborate Turing Test to determine if the robot Ava (Alicia Vikander) is capable of consciousness.

About halfway through the movie, Bateman confesses to Smith the source of Ava’s intelligence “software.” It came from Blue Book’s own search data:

“It was the weird thing about search engines. They were like striking oil in a world that hadn’t invented internal combustion. They gave too much raw material. No one knew what to do with it. My competitors were fixated on sucking it up, and trying to monetize via shopping and social media. They thought engines were a map of what people were thinking. But actually, they were a map of how people were thinking. Impulse, response. Fluid, imperfect. Patterned, chaotic.”

As a search behaviour guy, that sounded like more fact than fiction. I’ve always thought search data could reveal much about how we think. That’s why John Motavalli’s recent column, Google Looks Into Your Brain And Figures You Out, caught my eye. Here, it seemed, fiction was indeed becoming fact. And the fact is that when we use one source for a significant chunk of our online lives, we give that source the ability to capture a representative view of our related thinking. Google and our searching behaviors, or Facebook and our social behaviors, both come immediately to mind.

Motavalli’s reference to Dan Ariely’s post about micro-moments is just one example of how Google can peek under the hood of our noggins and start to suss out what’s happening in there. What makes this either interesting or scary as hell, depending on your philosophic bent, is that Ariely’s area of study is not our logical, carefully processed thoughts but our subconscious, irrational behaviors. And when we’re talking artificial intelligence, it’s that murky underbelly of cognition that is the toughest nut to crack.

I think Ex Machina’s writer/director Alex Garland may have tapped something fundamental in the little bit of dialogue quoted above. If the data we willingly give up in return for online functionality provides a blueprint for understanding human thought, that’s a big deal. A very big deal. Ariely’s blog post talks about how a better understanding of micro-moments can lead to better ad targeting. To me, that’s kind of like using your new Maserati to drive across the street and visit your neighbor – a total waste of horsepower. I’m sure there are higher things we can aspire to than figuring out a better way to deliver a hotels.com ad. Both Google and Facebook are full of really smart people. I’m pretty sure someone there is capable of connecting the dots between true artificial intelligence and their own brand of world domination.

At the very least, they could probably whip up a really sexy robot.

A New Definition of Order

The first time you see the University of Texas at Austin’s AIM traffic management simulator in action, you can’t believe it would work. It shows the intersection of two 12-lane, heavily trafficked roads. There are no traffic lights, no stop signs, none of the traffic control systems we’re familiar with. Yet traffic zips through with an efficiency that’s astounding. It appears to be total chaos, but no car has to wait more than a few seconds to get through the intersection, and there’s nary a collision in sight. Not even a minor fender bender.

Oh, one more thing. The model depends on there being no humans to screw things up. All the vehicles are driverless. In fact, if just one of the vehicles had a human behind the wheel, the whole system would slow dramatically. The probability of an accident would also soar.

The thing about the simulation is that there is no order – or, at least, no order that is apparent to the human eye. The programmers at UT Austin seem to recognize this with a tongue-in-cheek nod to our need for rationality: this particular video clip is called “insanity.” There are other simulation videos available at the project’s website, including ones where humans drive cars at intersections controlled by stoplights. These seem much saner and more controlled. They’re also much less efficient, and likely more dangerous. No simulation that includes a human factor comes even close to matching the efficiency of the 100% autonomous option.

The AIM simulation is complex, but it isn’t complicated. It’s actually quite simple. As cars approach the intersection, they signal to a central “manager” whether they want to turn or go straight ahead. The manager predicts whether the vehicle’s path will intersect another vehicle’s predicted path. If it does, it delays the vehicle slightly until the path is clear. That’s it.
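
Here’s a minimal sketch of that logic in Python. To be clear, this is my own simplification, not the AIM project’s actual implementation: I’m assuming the intersection can be modeled as a grid of cells, a vehicle’s path as a list of (cell, tick) claims, and the manager as nothing more than a ledger that delays each new vehicle until its claims stop colliding with the ones already granted.

    # A toy intersection manager in the spirit of the description above
    # (my simplification, not the AIM project's code).

    class IntersectionManager:
        def __init__(self):
            self.reserved = set()  # every (cell, tick) claim already granted

        def request(self, path, arrival_tick, max_delay=100):
            # Find the smallest delay at which this path fits, then book it.
            for delay in range(max_delay):
                claims = {(cell, arrival_tick + delay + t)
                          for t, cell in enumerate(path)}
                if not claims & self.reserved:  # no conflict with prior grants
                    self.reserved |= claims
                    return delay
            raise RuntimeError("intersection saturated")

    manager = IntersectionManager()
    # Two straight-ahead paths that cross at cell (2, 2):
    print(manager.request([(2, 0), (2, 1), (2, 2), (2, 3)], arrival_tick=0))  # 0
    print(manager.request([(0, 2), (1, 2), (2, 2), (3, 2)], arrival_tick=0))  # 1

No light ever forces an empty lane to idle; each vehicle waits only as long as its own conflicts require.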

The complexity comes in trying to coordinate hundreds of these paths at any given moment. The advantage the automated solution has is that it is in communication with all the vehicles. What appears chaotic to us is actually highly connected and coordinated. It’s fluid and organic. It has a lot in common with things like beehives, ant colonies and even the rhythms of our own bodies. It may not be orderly in our rational sense, but it is natural.

Humans don’t deal very well with complexity. We can’t keep track of more than a dozen or so variables at any one time. We categorize and “chunk” data into easily managed sets that don’t overwhelm our working memory. We always try to simplify things by imposing order. We use heuristics when things get too complex. We make gut calls and guesses. Most of the time this works pretty well, but the system gets bogged down quickly. If we pulled the family SUV into the intersection shown in the AIM simulation, we’d probably jam on the brakes and have a minor mental meltdown as driverless cars zipped by us.

Artificial intelligence, on the other hand, loves complexity. It can juggle amounts of disparate data that humans could never dream of managing. This is not to say that computers are more powerful than humans. It’s just that they’re better at different things. It’s referred to as Moravec’s Paradox: It’s relatively easy to program a computer to do what a human finds hard, but it’s really difficult to get it to do what humans find easy. Tracking the trajectories and coordinating the flow of hundreds of autonomous cars would fall into the first category. Understanding emotions would fall into the second category.

This matters because, increasingly, technology is creating a world that is more dynamic, fluid and organic. Order, from our human perspective, will yield to efficiency. And the fact is that – in data-rich environments – machines will be much better at this than humans. Just like our perspectives on driving, our notions of order and efficiency will have to change.

Ode to a Grecian Eurozone

I’d like to comment on the Greek debt crisis. But I don’t know anything about it. Zip. Or, as they say in Athens, μηδέν. I do, however, know how to say zero in Greek, thanks to Google Translate. At least for the next few minutes. I also happen to know rather a lot right now about the Tour de France, how to wire RV batteries, how to balance pool chemicals, how to write obituaries and most of the plotlines for the Showtime series Homeland. I certainly know more about all those things than the average person. Tomorrow, I’ll probably know different stuff. And I will retain almost nothing. But if you ask me what in the world is happening right now, I’ll likely draw a blank. I’d say it’s all Greek to me, but a certain Mediapost columnist already stole that line. Damn you, Bob Garfield!

I’m not really sure if I’m concerned about this. After all, I’m the one who has chosen not to watch the news for a long time. My various sources feed me a steady diet of information, but it’s all been predetermined based on my interests. I’m in what they call a “filter bubble.” I’ve become my own news curator, and somewhere along the line I’ve completely filtered out anything to do with the Greek economy. That’s because I’m not really interested in the Greek economy – but I’m thinking maybe I should be.

(Incidentally, am I the only one who finds it a bit ironic that the word “economy” comes from – you guessed it – the Greek words for “house” and “management”?)

The problem is that I have a limited attention span. My memory capacity is a little more voluminous, but there are definite limits to that as well. To make matters worse, Google is making me intellectually lethargic. I don’t try as hard to remember stuff because I don’t have to. Why learn to count to 10 in Greek when I can just look it up when I need to? I’m not alone in this. We’re all going down the same blind-cornered path together. Sooner or later, we’ll all run into a major crisis we never saw coming. And it will be because we’ve all been looking in different places.

Forty years ago, to be well informed, you had to pay attention to mainstream news sources. It was the only option we had. We all got fed the same diet of information. Some of us retained more than others, but we all dined at the same table. Our knowledge capacity was first filled from these common news sources. Then, after that, we’d fill whatever nooks and crannies were left with whatever our unique interests might be. But we all, to some extent, shared a common context. Knowledge may not have been deep, but it was definitely broad.

Now, if I choose to learn more about the Greek economy, I certainly have plenty of opportunities to do so. But I’d be starting with a blank slate. It would take some work to get up to speed. So I have to decide whether it’s worth the effort to inform myself. Is the return worth the investment? Something has to tip the balance to make it important enough to learn more about whatever it is the Greeks are referendumming (referendering?) about. And in the meantime, there will be a lot of other things competing for that same limited supply of information-gathering attention. Tomorrow, for instance, it might become really important for me to find out how close BC is to legalizing pot, or what the wildfire hazard is in Northern Saskatchewan, or what July’s weather is like in Chiang Mai. All of these things are relatively easy to find, but I have to reserve enough retention capacity to use the information once I find it. Information may want to be free, but the resources required to utilize it deplete our limited stores of cognitive ability.

Perhaps we’re saving more of our attention for on-demand information requirements. Or maybe we’re just filtering out more of what we used to call news. Whatever the cause, I think we’re losing our common cultural context, bit by byte. A community is defined by what it has in common, and the more technology allows us to pursue our individual interests, the more we surrender the common narratives that used to bind us.

How Activation Works in an Absolute Value Market

As I covered last week, if I mention a brand to you – like Nike, for instance – your brain immediately pulls back your own interpretation of the brand. What has happened, in a split second, is that the activation of that one node – let’s call it the Nike node – triggers the activation of several related nodes in your brain, which are quickly assembled into a representation of the brand Nike. This is called Spreading Activation.
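
For the curious, here’s what that looks like mechanically – a minimal sketch of the classic spreading-activation model, in which activation injected at one node flows along weighted association links and fades with each hop. The network, the weights and the decay factor below are all invented for illustration.

    # A toy spreading-activation pass over an association network.
    # The network, weights and decay factor are invented for illustration.

    def spread(graph, start, decay=0.5, threshold=0.1):
        # Push activation out from `start`, attenuating it by edge weight
        # and `decay` at each hop; drop anything below `threshold`.
        activation = {start: 1.0}
        frontier = [start]
        while frontier:
            node = frontier.pop()
            for neighbor, weight in graph.get(node, {}).items():
                incoming = activation[node] * weight * decay
                if incoming > activation.get(neighbor, 0.0) and incoming > threshold:
                    activation[neighbor] = incoming
                    frontier.append(neighbor)
        return activation

    associations = {
        "Nike":    {"swoosh": 0.9, "Just Do It": 0.8, "running": 0.7},
        "running": {"fitness": 0.8, "marathon": 0.6},
    }
    print(spread(associations, "Nike"))
    # {'Nike': 1.0, 'swoosh': 0.45, 'Just Do It': 0.4, 'running': 0.35, ...}

The brand prime is just the entry point; your sense of “Nike” is whatever that cascade manages to touch.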

This activation is all internal. It’s where most of the efforts of advertising have been focused over the past several decades. Advertising’s job has been to build a positive network of associations so when that prime happens, you have a positive feeling towards the brand. Advertising has been focused on winning territory in this mental landscape.

Up to now, the boundaries of our own rationality have restricted us to this internal landscape when making consumer decisions. Access to reliable and objective information about possible purchases was limited; getting it required more effort than we were willing to expend. So, for the vast majority of purchases, these internal representations were enough for us. They acted as a proxy for information that lay beyond our grasp.

But the world has changed. For almost any purchase category you can think of, there exists reliable, objective information that is easy to access and filter. We are no longer restricted to internal brand activations (relative values based on our own past experiences and beliefs). Now, with a few quick searches, we can access objective information, often based on the experiences of others. In their book of the same name, Itamar Simonson and Emanuel Rosen call these sources “Absolute Value.” For more and more purchases, we turn to external sources because we can. The effort invested is more than compensated for by the value returned. In the process, the value of traditional branding is being eroded. This is truer for some product categories than others. The higher the risk or the level of interest, the more the prospect will engage in an external activation. But across all product categories, there has been a significant shift from the internal to the external.

What this means for advertising is that we have to shift our focus from internal spreading activations to external ones. Now, when we retrieve an internal representation of a product or brand, it typically acts as a starting point, not the end point. That starting point is then modified, or discarded completely, depending on the external information we access. The first activated node is our own initial concept of the product, but the subsequent nodes are spread throughout the digitized information landscape.

In an internal spreading activation, the nodes activated and the connections between those nodes are all conducted at a subconscious level. It’s beyond our control. But an external spreading activation is a different beast. It’s a deliberate information search conducted by the prospect. That means that the nodes accessed and the connections between those nodes become of critical importance. Advertisers have to understand what those external activation maps look like. They have to be intimately aware of the information nodes accessed and the connections used to get to those nodes. They also have to be familiar with the prospect’s information consumption preferences. At first glance, this seems to be an impossibly complex landscape to navigate. But in practice, we all tend to follow remarkably similar paths when establishing our external activation networks. Search is often the first connector we use. The nodes accessed and the information within those nodes follow predictable patterns for most product categories.

For the advertiser, it comes down to a question of where to most profitably invest your efforts. Traditional advertising was built on the foundation of controlling the internal activation. This was the psychology behind classic treatises such as Ries and Trout’s “Positioning: The Battle for Your Mind.” And, in most cases, that battle was won by whoever could assemble the best collection of smoke and mirrors. Advertising messaging had very little to do with facts and everything to do with persuasion.

But as Simonson and Rosen point out, the relative position of a brand in a prospect’s mind is becoming less and less relevant to the eventual purchase decision. Many purchases are now determined by what happens in the external activation. Factual, reliable information – and easy access to that information – becomes critical. Smoke and mirrors are relegated to advertising “noise” in this scenario. The marketer with a deep understanding of how the prospect searches for and determines the “truth” about a potential product will be the one who wins. And traditional marketing is becoming less and less important to that prospect.

The Pros and Cons of a Fuel Efficient Brain

Your brain will only work as hard as it has to. And if it makes you feel any better, my brain is exactly the same. That’s the way brains work. They conserve horsepower until it’s absolutely needed. In the background, the brain is doing a constant calculation: “What do I want to achieve, and based on everything I know, what is the easiest way to get there?” You could call it lazy, but I prefer the term “efficient.”

The brain has a number of tricks to do this that involve relatively little thinking. In most cases, they involve swapping something that’s easy for your brain to do in place of something difficult. For instance, consider when you vote. It would be extraordinarily difficult to weigh all the factors involved to make a truly informed vote. It would require a ton of brainpower. But it’s very easy to vote for whom you like. We have a number of tricks we use to immediately assess whether we like and trust another individual. They require next to no brainpower. Guess how most people vote? Even those of us who pride ourselves on being informed voters rely on these brain shortcuts more than we would like to admit.

Here’s another example that’s just emerging, thanks to search engines. It’s called the Google Effect, and it’s an extension of a concept called transactive memory. Researchers Betsy Sparrow, Jenny Liu and Daniel Wegner identified the Google Effect in 2011. Wegner first explained transactive memory back in the ’80s. Essentially, it means that we won’t bother to remember something that we can easily reference when we need it. When Wegner first talked about transactive memory, he used the example of a husband and wife. The wife was good at remembering important dates, such as anniversaries and birthdays. The husband was good at remembering financial information, such as bank balances and when bills were due. The wife didn’t have to remember financial details and the husband didn’t have to worry about dates. All each had to remember was what the other was good at remembering. Wegner called this “chunking” of our memory requirements “metamemory.”

If we fast-forward 30 years from Wegner’s original paper, we find a whole new relevance for transactive memory, because we now have the mother of all “metamemories,” called Google. If we hear a fact but know that it’s something that can easily be looked up on Google, our brains automatically decide to expend little to no effort in trying to memorize it. Subconsciously, the brain goes into power-saver mode. All we remember is that when we do need to retrieve the fact, it will be a few clicks away on Google. Nicholas Carr fretted about whether this and other cognitive shortcuts were making us stupid in his book “The Shallows.”

But there are other side effects that come from the brain’s tendency to look for shortcuts without our awareness. I suspect the same thing is happening with social connections. Which would you think requires more cognitive effort: a face-to-face conversation with someone, or texting them on a smartphone?

Face-to-face conversation can put a huge cognitive load on our brains. We’re receiving communication at a much greater bandwidth than with text. When we’re across from a person, we not only hear what they’re saying, we’re reading emotional cues, watching facial expressions, interpreting body language and monitoring vocal tones. It’s a much richer communication experience, but it’s also much more work. It demands our full attention. Texting, on the other hand, can easily be done alongside other tasks. It’s asynchronous – we can pause and pick up whenever we want. I suspect it’s no coincidence that younger generations are moving more and more to text-based digital communication. Their brains are pushing them in that direction because it’s less work.

One of the great things about technology is that it makes our lives easier. But is that also a bad thing? If we know that our brains will always opt for the easiest path, are we putting ourselves in a long, technology-aided death spiral? That was Nicholas Carr’s contention. Or are we freeing up our brains for more important work?

More on this to come next week.

Our Brain on Books

Here’s another neuroscanning study out of Emory University showing the power of a story.

Lead researcher Gregory Berns and his team wanted to “understand how stories get into your brain, and what they do to it.” Their findings seem to indicate that stories – in this case, a historical fiction novel about Pompeii – caused a number of changes in the participants’ brains, at least in the short term. Over time, some of these changes decayed, but more research is required to determine how long-lasting the changes are.

One would expect reading to alter related parts of the brain, and this was true in the Emory study. The left temporal cortex, a section of the brain that handles language reception and interpretation, showed signs of heightened connectivity for a period of time after reading the novel. This is almost like the residual effects of exercise on a muscle, which responds favorably to usage.

What was interesting, however, was that the team also saw increased connectivity in the areas of the brain that control representations of sensation for the body. This relates to Antonio Damasio’s “embodied semantics” theory, in which reading metaphors – especially those relating specifically to tactile images – activates the same parts of the brain that control the corresponding physical activity. The Emory study (and Damasio’s work) seems to show that if you read a novel that depicts physical activity, such as running through the streets of Pompeii as Vesuvius erupts, your brain is firing the same neurons as it would if you were actually doing it!

There are a number of interesting aspects to consider here, but what struck me is the multi-prong impact a story has on us. Let’s run through them:

Narratives have been shown to be tremendously influential frameworks for us to learn and update our sense of the world, including our own belief networks. Books have been a tremendously effective agent for meme transference and propagation. The structure of a story allows us to grasp concepts quickly, but also reinforces those concepts because it engages our brain in a way that a simple recital of facts could not. We relate to protagonists and see the world through their eyes. All our socially tuned, empathetic abilities kick into action when we read a story, helping to embed new information more fully. Reading a story helps shape our world view.

Reading exercises the language centers of our brain, heightening neural connectivity and improving its effectiveness. Neurologists call this “shadow activity” – a concept similar to muscle memory.

Reading about physical activity fires the same neurons that we would use to do the actual activity. So, if you read an action thriller, even though you’re lying flat on a sofa, your brain thinks you’re the one racing a motorcycle through the streets of Istanbul and battling your arch nemesis on the rooftops of Rome. While it might not do much to improve muscle tone, it does begin to create neural pathways. It’s the same concept of visualization used by Olympic athletes.

For Future Consideration

As we learn more about the underlying neural activity of story reading, I wonder how we can use this to benefit ourselves. The biggest question I have is this: if a story in written form has the capacity to impact us at all the aforementioned levels, what would more sense-engaged media like television or video games do? If reading about a physical activity tricks the brain into firing the corresponding sensory-controlling neurons, what would happen if we were simulating that activity on a motion-controlled gaming system like Microsoft’s Xbox? My guess would be that the sensorimotor connections would obviously be much more active (because we’re physically active). Unfortunately, research in the area of embodied semantics is still at an early stage, so many of these questions have yet to be answered.

However, if our stories are conveyed through a more engaging sensory experience, with full visuals and sound, do we lose some opportunity for abstract analysis? The parts of our brain we use to read depend on relatively slow processing loops. I believe much of the power of reading lies in the demands it places on our imagination to fill in the sensory blanks. When we read about a scene in Pompeii, we have to create the visuals, the soundtrack and the tactile responses. Does all that required rendering more fully engage our sense-making capabilities, giving us more time to interpret and absorb?