Photos: Past, Present and Future

I was at a family reunion this past week. While there, my family did what families do at reunions: We looked at family photos.

In our case, our photographic history started some 110 years ago, with my great-great-grandfather George and his wife, Kezia. We have a stunning picture of the couple, with Kezia wearing an ostrich feather hat.

George and Kezia Ching – Redondo Beach

At the time of the photo, George was an ostrich feather dyer in Hollywood, California. Apparently, there was a need for dyed ostrich feathers in turn-of-the-century Hollywood. That need didn’t last long. The bottom fell out of the ostrich feather market, and George and Kezia set their sights north of the 49th parallel, high-tailing it for Canada.

We’re a lucky family. We have four generations of photographic evidence of my mother’s forebears. They were solidly middle class and could afford the luxury of having a photo taken, even around the turn of the century. There were plenty of preserved family images that fueled many conversations and sparked memories as we gathered the clan.

What was interesting to me was that some 110 years after this memorable portrait was taken, we also took many new photos so we could remember this reunion in the future. With all the technological change that has happened since George and Kezia posed in all their ostrich-feather-accessorized finery, the basic format of a two-dimensional visual representation was still our chosen medium for capturing the moment.

We talk about media a lot here at MediaPost — enough that it’s included in the headline of the post you’re reading. I think it’s worth a quick nod of appreciation to media that have endured for more than a century. Books and photos both fall into this category. Great-Great-Grandfather George might be a bit flustered if he were looking at a book on a Kindle or viewing the photo on an iPhone, but the format of the medium itself would not be that foreign to him. He would be able to figure it out.

What dictates longevity in media? I think we have an inherent love for media that are a good match for both our senses and our capacity to imagine. Books give us the cognitive room to imagine worlds that no CGI effect has yet been able to match. And a photograph is still the most convenient way to render permanent the fleeting images that chase across our visual cortex. This is all the more true when those images are made up of the faces we love. Like books, photos also give our minds the room to fill in the blanks, remembering the stories that go with the static image.

Compare a photo to something like a video. We could easily have taken videos to capture the moment. All of us had a pretty good video camera in our pockets. But we didn’t. Why not?

Again, we have to look at intended purpose at the moment of future consumption. Videos are linear. They force their own narrative arc upon us. We have to allocate the time required to watch the video to its conclusion. But a photo is randomly accessed. Our senses consume it at their own pace and prerogative, free of the restraints of the medium itself. For things like communal memories at a family reunion, a photo is the right match. There are circumstances where a video would be a better fit. This wasn’t one of them.

Our Family – 2019

There is one thing about photos that will be different moving forward. They are now in the digital domain, which means they can be stored with no constraints on space. It also means that we can take advantage of appended metadata. For the sake of my descendants, I hope this makes the bond between the photo and the stories a little more durable than what we currently deal with: if we were lucky, we had a quick notation on the back of an old photo to clarify the whos, whens and wheres.

A few of my more archivally inclined cousins started talking about the future generations of our family. When they remember us, what media will they be using? Will they look at the many selfies and digital shots taken in 2019 and try to remember who that person between Cousin Dave and Aunt Lorna was? What platform will they use to store the photos? What will be the equivalent of the family album in 2119? How will they be archiving their own memories?

I suspect that if I were there, I wouldn’t be that surprised at the medium of choice.

Reality vs. Meta-Reality

“I know what I like, and I like what I know;”
Genesis

I watched the Grammys on Sunday night. And as it turned out, I didn’t know what I liked. And I thought I liked what I knew. But by the time I wrote this column (on Monday after the Grammys) I had changed my mind.

And it was all because of the increasing gap between what is real, and what is meta-real.

Real is what we perceive with our senses at the time it happens. Meta-real is how we reshape reality after the fact and then preserve it for future reference. And thanks to social media, the meta-real is a booming business.

Nobel laureate Daniel Kahneman first explored this with his work on the experiencing self and the remembering self. In a stripped-down example, imagine two scenarios. Scenario 1 has your hand immersed for 60 seconds in ice-cold water that causes a moderate amount of pain. Scenario 2 has your hand immersed for 90 seconds: for the first 60 seconds, the water is at the same temperature as in Scenario 1, but you then leave your hand immersed for an additional 30 seconds while the water is slowly warmed by 1 degree.

After going through both scenarios and being told you have to repeat one of them, which would you choose? Logically speaking, you should choose Scenario 1. It’s uncomfortable, but you avoid an extra 30 seconds of slightly less painful immersion. But for those who went through it, that’s not what happened. Eighty percent of those who noticed that the water got a bit warmer chose to redo Scenario 2.

It turns out that we have two mental biases that kick in when we remember something we experienced:

  1. Duration doesn’t count
  2. Only the peak (best or worst moment) and the end of the experience are registered.
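
If you’ll forgive a quick detour into code, here is a minimal sketch of how those two biases distort memory. The per-second pain ratings and the simple “average of peak and end” scoring are invented for illustration; this is the peak-end idea in toy form, not Kahneman’s actual data or model.

```python
# Toy illustration of duration neglect and the peak-end rule.
# Pain is rated 0 (none) to 10 (worst); the numbers are invented.

def experienced_total(pain_per_second):
    """What the 'experiencing self' accumulates: every second counts."""
    return sum(pain_per_second)

def remembered_score(pain_per_second):
    """What the 'remembering self' keeps: roughly the average of the peak
    and the end. Duration is ignored entirely."""
    return (max(pain_per_second) + pain_per_second[-1]) / 2

scenario_1 = [7] * 60               # 60 seconds at a painful 7/10
scenario_2 = [7] * 60 + [6] * 30    # same 60 seconds, plus 30 slightly milder ones

print(experienced_total(scenario_1), experienced_total(scenario_2))  # 420 vs. 630
print(remembered_score(scenario_1), remembered_score(scenario_2))    # 7.0 vs. 6.5
```

Measured second by second, Scenario 2 contains strictly more pain. Remembered by peak and end, it scores better — which is exactly why most people volunteer to repeat it.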

This applies to a lot more than just cold-water experiments. It also holds true for vacations, medical procedures, movies and even the Grammys. Not only that, there is an additional layer of meta-analysis that shifts us even further from the reality we actually experienced.

After I watched the Grammys, I had my own opinion of which performances I liked and those I didn’t care for. But that opinion was a work in progress. On Monday morning, I searched for “Best moments of Grammys 2019.” Rather quickly, my opinion changed to conform with what I was reading. And those summaries were in turn based on an aggregate of opinions gleaned from social media. It was Wisdom of Crowds – applied retroactively.

The fact is that we don’t trust our own opinions. This is hardwired in us. Conformity is something the majority of us look for. We don’t want to be the only one in the room with a differing opinion. Social psychologist Solomon Asch proved this almost 70 years ago. The difference is that in the Asch experiment, conformity happened in the moment. Now, thanks to our digital environment where opinions on anything can be found at any time, conformity happens after the fact. We “sandbox” our own opinions, waiting until we can see if they match the social media consensus. For almost any event you can name, there is now a market for opinion aggregation and analysis. We take this “meta” data and reshape our own reality to match.

It’s not just the malleability of our reality that is at stake here. Our memories serve as guides for the future. They color the actions we take and the people we become. We evolved as conformists because that was a much surer bet for our survival than relying on our own experiences alone. But might this be a case of a good thing taken too far? Are we losing too much confidence in the validity of our own thoughts and opinions?

I’m pretty sure it doesn’t matter what Gord Hotchkiss thinks about the Grammys of 2019. But I fear there’s much more at stake here.

Addicted to Tech

A few columns ago, I mentioned one of the aspects that is troubling me about technology – the shallowness of social media. I had mentioned at the time that there were other aspects that were equally troubling. Here’s one:

Technology is addictive – and it’s addictive by design.

Let’s begin by looking at the definition of addiction:

Persistent compulsive use of a substance known by the user to be harmful

So, let’s break it down. I don’t think you can quibble with the persistent, compulsive use part. When’s the last time you had your iPhone in your hand? We can simply swap out “substance” for “device” or “technology.” So that leaves us with the last qualifier, “known by the user to be harmful” – and there are two parts to this: is it harmful, and does the user know it’s harmful?

First, let’s look at the neurobiology of addiction. What causes us to use something persistently and compulsively? Here, dopamine is the culprit. Our reward center uses dopamine and the pleasurable sensation it produces as a positive reinforcement to cause us to pursue activities which over many hundreds of generations have proven to be evolutionarily advantageous. But Dr. Gary Small, from the UCLA Brain Research Institute, warns us that this time could be different:

“The same neural pathways in the brain that reinforce dependence on substances can reinforce compulsive technology behaviors that are just as addictive and potentially destructive.”

We like to think of big tobacco as the most evil of all evil empires – guilty of promoting addiction to a harmful substance – but is there a lot separating it from the purveyors of tech – Facebook or Google, for instance? According to Tristan Harris, there may be a very slippery slope between the two. I’ve written about Tristan before. He’s the former Google product manager who launched the Time Well Spent non-profit, devoted to stopping “tech companies from hijacking our minds.” Harris points the finger squarely at the big Internet players for creating platforms that are intentionally designed to suck up as much of our time as possible. There’s empirical evidence to back up Harris’s accusations. Researchers at Michigan State University and at two universities in the Netherlands found that even seeing the Facebook logo can trigger a conditioned response in a social media user that starts the dopamine cycle spinning. We start jonesing for a social media fix.

So, what if our smartphones and social media platforms seduce us into using them compulsively? What’s the harm, as long as it’s not hurting us? That’s the second part of the addiction equation – is whatever we’re using harmful? After all, it’s not like tobacco, which was proven to cause lung cancer.

Ah, but that’s the thing, isn’t it? We were smoking cigarettes for almost a hundred years before we finally found out they were bad for us. Sometimes it takes a while for the harmful effects of addiction to appear. The same could be true for our tech habit.

Tech addiction plays out at many different levels of cognition. This could potentially be much more sinister than just the simple waste of time that Tristan Harris is worried about. There’s mounting evidence that overuse of tech could dramatically alter our ability to socialize effectively with other humans. The debate, which I’ve talked about before, comes when we substitute screen-to-screen interaction for face-to-face. The supporters say that this is simply another type of social bonding – one that comes with additional benefits. The naysayers worry that we’re just not built to communicate through screens and that – sooner or later – there will be a price to be paid for our obsessive use of digital platforms.

Dr. Jean Twenge, professor of psychology at San Diego State University, researches generational differences in behavior. It’s here that the full impact of a disruptive environmental factor can be seen. She found a seismic shift in behaviors between Millennials and the generation that followed them: a profound difference in how these generations viewed the world and where they spent their time. And it started in 2012 – the year the proportion of Americans who owned a smartphone surpassed 50 percent. She sums up her concern in unequivocal terms:

“The twin rise of the smartphone and social media has caused an earthquake of a magnitude we’ve not seen in a very long time, if ever. There is compelling evidence that the devices we’ve placed in young people’s hands are having profound effects on their lives—and making them seriously unhappy.”

Not only are we less happy, we may be becoming less smart. As we become more reliant on technology, we do something called cognitive offloading. We rely on Google rather than our memories to retrieve facts. We trust our GPS more than our own wayfinding strategies to get us home. Cognitive offloading is a way to move beyond the limits of our own minds, but there may be an unacceptable trade-off here. Brains are like muscles – if we stop using them, they begin to atrophy.

Let’s go back to that original definition and the three qualifying criteria:

  • Persistent, compulsive use
  • Harmful
  • We know it’s harmful

In the case of tech, let’s not wait a hundred years to put check marks after all of these.

Flow and the Machine

“In the future, either you’re going to be telling a machine what to do, or the machine is going to be telling you.”

Christopher Penn – VP of Marketing Technology, Shift Communications.

I often talk about the fallibility of the human brain – those irrational cognitive biases that can cause us to miss the reality that’s right in front of our face. But there’s another side to the human brain – the intuitive, almost mystical machinations that happen when we’re on a cognitive roll, balancing gloriously on the edge between consciousness and subconsciousness. Malcolm Gladwell took a glancing shot at this in his mega-bestseller Blink. But I would recommend going right to the master of “flow” – Mihaly Csikszentmihalyi (pronounced, if you’re interested, me-hi Chick-sent-me-hi). The Hungarian psychologist coined the term “flow,” referring to a highly engaged mental state where we’re completely absorbed in the work at hand. Csikszentmihalyi calls it the “psychology of optimal experience.”

It turns out there’s a pretty complicated neuroscience behind flow. In a blog post, gamer Adam Sinicki describes a state where the brain finds an ideal balance between instinctive behavior and total focus on one task. The state is called transient hypofrontality, and it can sometimes be brought on by physical exercise – it’s why some people can think better while walking, or even jogging. The brain juggles the resources required, and this can force a stepping down of the prefrontal cortex, the part of the brain that causes us to question ourselves. This part of the brain is required in unfamiliar circumstances, but in a situation where we’ve thoroughly rehearsed the actions required, it’s actually better if it takes a break. This allows other – more intuitive – parts of the brain to come to the fore. And that may be the secret of “flow.” It may also be the one thing that machines can’t replicate – yet.

The Rational Machine

If we were to compare the computer to a part of the brain, it would probably be the prefrontal cortex (PFC). When we talk about cognitive computing, what we’re really talking about is building a machine that can mimic – or exceed – the capabilities of the PFC. This is the home of our “executive function”: complex decision making, planning, rationalization and our own sense of self. It’s probably not a coincidence that the part of our brain we rely on to reason through complex challenges, like designing artificial intelligence, would build a machine in its own image. And in this instance, we’re damned close to surpassing ourselves. The PFC is an impressive chunk of neurobiology in its flexibility and power, but speedy it’s not. In fact, we’ve found that if we happen to make a mistake, the brain slows almost to a standstill. It shakes our confidence and kills any “flow” that might be happening in its tracks. This is what happens to athletes when they choke. With artificial intelligence, we are probably on the cusp of creating machines that can do most of what the PFC can do, only faster, more reliably and with the ability to process much more information.

But there’s a lot more to the brain than just the PFC. And it’s in this ethereal intersection between reason and intuition that the essence of being human might be hiding.

The Future of Flow

What if we could harness “flow” at will? What if we worked in partnership with a machine that could crunch data in real time and present us with the inputs required to continue our flow-fueled exploration without the fear of making a mistake? It’s not so much a machine telling us what to do – or the reverse – as it is a partnership between human intuition and machine-based rationalization. It’s analogous to driving a modern car, where intelligent safety and navigation features backstop our ability to drive.

Of course, it may just be a matter of time before machines best us in this area as well. Perhaps machines have already mastered flow, because they don’t have to worry about the consequences of making a mistake. But it seems to me that if humans have a future, it’s not going to be in our ability to crunch data and rationalize. We’ll have to find something a little more magical to stake our claim with.

Branding in the Post-Truth Age

If 2016 was nothing else, it was a watershed year for the concept of branding. In the previous 12 months, we saw a decoupling of the two elements we have always believed make up brands. As fellow Spinner Cory Treffiletti said recently:

“You have to satisfy the emotional quotient as well as the logical quotient for your brand. If not, then your brand isn’t balanced, and is likely to fall flat on its face.”

But another MediaPost article highlighted an interesting trend in branding:

“Brands will strive to be ‘meticulously un-designed’ in 2017, according to WPP brand agency Brand Union.”

This, I believe, speaks to where brands are going. And depending on which side of the agency desk you happen to be on, this could either be good news or downright disheartening.

Let’s start with the logical side of branding. In their book Absolute Value, Itamar Simonson and Emanuel Rosen sounded the death knell for brands as a proxy for consumer information. Their premise, which I agree with, is that in a market that is increasingly moving towards perfect information, brands have lost their position of trust. We would rather rely on information that comes from non-marketing sources.

But brands have been aspiring to transcend their logical side for at least five decades now. This is the emotional side of branding that Treffiletti speaks of. And here I have to disagree with Simonson and Rosen. This form of branding appears to be very much alive and well, thank you. In fact, in the past year, this form of branding has upped its game considerably.

Brands, at their most potent, embed themselves in our belief systems. It is here, close to our emotional hearts, that the Promised Land for brands lies. Read Montague’s famous Coke neuro-imaging experiment showed that for Coke drinkers, the brand became part of who they are. Research I was involved in showed that favored brands get a positive response in a split second, far faster than the rational brain can act. We are hardwired to believe in brands, and the more loved the brand, the stronger the reaction. So let’s look at beliefs for a moment.

Not all beliefs are created equal. Our beliefs have an emotional valence – some beliefs are defended more strongly than others. There is a hierarchy of belief defense. At the highest level are our core beliefs: how we feel about things like politics and religion. Brands are trying to intrude on this core belief space. There has been no better example of this than the brand of Donald Trump.

Beliefs are funny things. From an evolutionary perspective, they’re valuable. They’re mental shortcuts that guide our actions without requiring us to think. They are a type of emotional autopilot. But they can also be quite dangerous for the same reason. We defend our beliefs against skeptics – and we defend our core beliefs most vigorously. Reason has nothing to do with it. It is this type of defense system that brands would love to build around themselves.

We like to believe our beliefs are unique to us – but in actual fact, beliefs also materialize out of our social connections. If enough people in our social network believe something is true, so will we. We will even create false memories and narratives to support the fiction. The evolutionary logic is quite simple. Tribes have better odds for survival than individuals, and our tribe will be more successful if we all think the same way about certain things. Beliefs create tribal cohesion.

So, the question is – how does a brand become a belief? It’s this question that possibly points the way in which brands will evolve in the Post-Truth future.

Up to now, brands have always been unilaterally “manufactured” – carefully crafted by agencies as a distillation of marketing messages and delivered to an audience. But now, brands are multilaterally “emergent” – formed through a network of socially connected interactions. All brands are now trying to ride the amplified waves of social media. This means they have to be “meme-worthy” – which really means they have to be both note- and share-worthy. To become more amplifiable, brands will become more “jagged,” trying to act as catalysts for going viral. Branding messages will naturally evolve toward outlier extremes in their quest to be noticed and interacted with. Brands are aspiring to become “brain-worms” – wait, that’s not quite right – brands are becoming “belief-worms,” slipping past the rational brain if at all possible to lodge themselves directly in our belief systems. Brands want to be emotional shorthand notations that resonate with our most deeply held core beliefs. We have constructed a narrative of who we are, and brands that fit that narrative are adopted and amplified.

It’s this version of branding that seems to be where we’re headed – a socially infectious virus that creates its own version of the truth and builds a bulwark of belief to defend itself. Increasingly, branding has nothing to do with rational thought or a quest for absolute value.

Ex Machina’s Script for Our Future

One of the more interesting movies I’ve watched in the past year has been Ex Machina. Unlike the abysmally disappointing Transcendence (how can you screw up Kurzweil, for God’s sake?), Ex Machina is a tightly directed, frighteningly claustrophobic sci-fi thriller that peels back the moral layers of artificial intelligence one by one.

If you haven’t seen it, do so. But until you do, here’s the basic setup. Caleb Smith (Domhnall Gleeson) is a programmer at a huge Internet search company called Blue Book (think Google). He wins a contest where the prize is a week spent with the CEO, Nathan Bateman (Oscar Isaac), at his private retreat. Bateman’s character is best described as Larry Page meets Steve Jobs meets Larry Ellison meets Charlie Sheen – brilliant as hell, but one messed-up dude. It soon becomes apparent that the contest is a ruse, and Smith is there to play the human in an elaborate Turing test to determine if the robot Ava (Alicia Vikander) is capable of consciousness.

About halfway through the movie, Bateman confesses to Smith the source of Ava’s intelligence “software.” It came from Blue Book’s own search data:

“It was the weird thing about search engines. They were like striking oil in a world that hadn’t invented internal combustion. They gave too much raw material. No one knew what to do with it. My competitors were fixated on sucking it up, and trying to monetize via shopping and social media. They thought engines were a map of what people were thinking. But actually, they were a map of how people were thinking. Impulse, response. Fluid, imperfect. Patterned, chaotic.”

To a search behaviour guy, that sounded like more fact than fiction. I’ve always thought search data could reveal much about how we think. That’s why John Motavalli’s recent column, Google Looks Into Your Brain And Figures You Out, caught my eye. Here, it seemed, fiction was indeed becoming fact. And that fact is, when we use one source for a significant chunk of our online lives, we give that source the ability to capture a representative view of our related thinking. Google and our searching behaviors, or Facebook and our social behaviors, both come immediately to mind.

Motavalli’s reference to Dan Ariely’s post about micro-moments is just one example of how Google can peek under the hood of our noggins and start to suss out what’s happening in there. What makes this either interesting or scary as hell, depending on your philosophic bent, is that Ariely’s area of study is not our logical, carefully processed thoughts but our subconscious, irrational behaviors. And when we’re talking artificial intelligence, it’s that murky underbelly of cognition that is the toughest nut to crack.

I think Ex Machina’s writer/director Alex Garland may have tapped into something fundamental in the little bit of dialogue quoted above. If the data we willingly give up in return for online functionality provides a blueprint for understanding human thought, that’s a big deal. A very big deal. Ariely’s blog post talks about how a better understanding of micro-moments can lead to better ad targeting. To me, that’s kind of like using your new Maserati to drive across the street and visit your neighbor – it seems a total waste of horsepower. I’m sure there are higher things we can aspire to than figuring out a better way to deliver a hotels.com ad. Both Google and Facebook are full of really smart people. I’m pretty sure someone there is capable of connecting the dots between true artificial intelligence and their own brand of world domination.

At the very least, they could probably whip up a really sexy robot.

A New Definition of Order

The first time you see the University of Texas at Austin’s AIM traffic management simulator in action, you can’t believe it would work. It shows the intersection of two 12-lane, heavily trafficked roads. There are no traffic lights, no stop signs, none of the traffic control systems we’re familiar with. Yet traffic zips through with an efficiency that’s astounding. It appears to be total chaos, but no cars have to wait more than a few seconds to get through the intersection, and there’s nary a collision in sight. Not even a minor fender bender.

Oh, one more thing. The model depends on there being no humans to screw things up. All the vehicles are driverless. In fact, if just one of the vehicles had a human behind the wheel, the whole system would slow dramatically. The probability of an accident would also soar.

The thing about the simulation is that there is no order – or, at least, there is no order that is apparent to the human eye. The programmers at UT Austin seem to recognize this with a tongue-in-cheek nod to our need for rationality: this particular video clip is called “insanity.” There are other simulation videos available at the project’s website, including ones where humans drive cars at intersections controlled by stoplights. These seem much saner and more controlled. They’re also much less efficient, and likely more dangerous. No simulation that includes a human factor comes even close to matching the efficiency of the 100% autonomous option.

The AIM simulation is complex, but it isn’t complicated. It’s actually quite simple. As cars approach the intersection, they signal to a central “manager” whether they want to turn or go straight ahead. The manager predicts whether the vehicle’s path will intersect another vehicle’s predicted path. If it does, it delays the vehicle slightly until the path is clear. That’s it.
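
To make that logic concrete, here is a minimal sketch in Python of a first-come, first-served reservation manager in the spirit of that description. It’s my own simplification, not the project’s actual code: the grid cells, car IDs and the one-cell-per-time-step assumption are all invented for illustration.

```python
# Toy intersection "manager": each car requests the cells its path will occupy,
# and the manager delays the car until every cell along its path is free.
# A simplified illustration of the idea described above, not the AIM project's code.

class IntersectionManager:
    def __init__(self):
        # Maps (grid cell, time step) -> the car that has reserved it.
        self.reservations = {}

    def request(self, car_id, path_cells, arrival_time):
        """path_cells is the ordered list of intersection cells the car's route
        (straight ahead or turning) passes through, one cell per time step.
        Returns the earliest start time with no conflicting reservation."""
        start = arrival_time
        while True:
            slots = [(cell, start + i) for i, cell in enumerate(path_cells)]
            if all(slot not in self.reservations for slot in slots):
                for slot in slots:
                    self.reservations[slot] = car_id
                return start      # path is clear: proceed at this time
            start += 1            # conflict: delay slightly and try again

manager = IntersectionManager()
print(manager.request("car_A", ["N1", "C", "S1"], arrival_time=0))   # 0: clear
print(manager.request("car_B", ["E1", "C", "W1"], arrival_time=0))   # 1: shares cell "C", delayed one step
print(manager.request("car_C", ["N2", "C2", "S2"], arrival_time=0))  # 0: no shared cells, goes immediately
```

A real system would reserve cells at much finer time resolution and account for vehicle speed and size, but the delay-until-clear idea is the same.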

The complexity comes in trying to coordinate hundreds of these paths at any given moment. The advantage the automated solution has is that it is in communication with all the vehicles. What appears chaotic to us is actually highly connected and coordinated. It’s fluid and organic. It has a lot in common with things like beehives, ant colonies and even the rhythms of our own bodies. It may not be orderly in our rational sense, but it is natural.

Humans don’t deal very well with complexity. We can’t keep track of more than a dozen or so variables at any one time. We categorize and “chunk” data into easily managed sets that don’t overwhelm our working memory. We always try to simplify things down by imposing order. We use heuristics when things get too complex. We make gut calls and guesses. Most of the time, it works pretty well, but this system gets bogged down quickly. If we pulled the family SUV into the intersection shown in the AIM simulation, we’d probably jam on the brakes and have a minor mental meltdown as driverless cars zipped by us.

Artificial intelligence, on the other hand, loves complexity. It can juggle amounts of disparate data that humans could never dream of managing. This is not to say that computers are more powerful than humans. It’s just that they’re better at different things. It’s referred to as Moravec’s Paradox: It’s relatively easy to program a computer to do what a human finds hard, but it’s really difficult to get it to do what humans find easy. Tracking the trajectories and coordinating the flow of hundreds of autonomous cars would fall into the first category. Understanding emotions would fall into the second category.

This matters because, increasingly, technology is creating a world that is more dynamic, fluid and organic. Order, from our human perspective, will yield to efficiency. And the fact is that – in data-rich environments – machines will be much better at this than humans. Just like our perspectives on driving, our notions of order and efficiency will have to change.