The Medium is the Message, Mr. President

Every day that Barack Obama was in the White House, he read 10 letters. Why letters? Because form matters. There’s still something about a letter. It’s so intimate. It uses a tactile medium. Emotions seem to flow more easily through cursive loops and the sound of pen on paper. Letters balance the raw and the reflective. As such, they may be an unusually honest glimpse into the soul of the writer. Obama seemed to get that. There was an entire team of hundreds of people at the White House that reviewed 10,000 letters a day and chose the 10 that made it to Obama, but the intent was to give an unfiltered snapshot of the nation at any given time. It was a mosaic of personal stories that – together – created a much bigger narrative.

Donald Trump doesn’t read letters. He doesn’t read much of anything. The daily presidential briefing has been dumbed down to media more fitting to the President’s 140-character attention span. Trump likes to be briefed with pictures and videos. His information medium of choice? Cable TV. And he has turned Twitter into his official policy platform.

Today, technology has exponentially multiplied the number of communication media available to us. And in that multiplication, Marshall McLuhan’s 50-year-old trope about the medium being the message seems truer than ever. The channels we choose – whether we’re on the sending or receiving end – carry their own inherent message. They say who we are, what we value, how we think. They intertwine with the message, determining how it will be interpreted.

I’m sad that letter writing is a dying art, but I’m also contributing to its demise. It’s been years since I’ve written a letter. I do write this column, which is another medium. But even here I’m mislabeling it. Technically, this is a blog post. A column is a concept embedded in the medium of print – with its accompanying physical restriction of column inches. But I like to call it a column, because in my mind that carries its own message. A column comes with an implicit promise between you – the readers – and myself, the author. Columns are meant to be regularly recurring statements of opinion. I have to respect the fact that I remain accountable for this Tuesday slot that MediaPost has graciously given me. Week after week, I try to present something that I hope you’ll find interesting and useful enough to keep reading. I feel I owe that to you. To me, a “post” feels more ethereal – with less of an ongoing commitment between author and reader. It’s more akin to drive-by writing.

So that brings me to one of the most interesting things about letters and President Obama’s respect for them. They are meant to be a thoughtful medium between two people. The thoughts captured within are important enough to the writer that they’re put in print but they are intended just for the recipient. They are one of the most effective media ever created to ask for empathetic understanding from one person in particular. And that’s how Obama’s Office of Presidential Correspondence treated them. Each letter represented a person who felt strongly enough about something that they wanted to share it with the President personally. Obama used to read his ten letters at the end of the day, when he had time to digest and reflect. He often made notations in the margins asking pointed questions of his staff or requesting more investigation into the circumstances chronicled in a letter. He chose to set aside a good portion of each day to read letters because he believed in the message carried by the medium: Individuals – no matter who they are – deserve to be heard.

The Death of Sears and the Edge of Chaos

So, here’s the question: Could Sears – the retail giant that has become the poster child for the death of mall-based retail shopping – have saved itself? It’s an important question, because I don’t think Sears was an isolated incident.

In 2006, historian Richard Longstreth explored the rise and fall of Sears. The rise is well chronicled. From their beginnings in 1886, Richard Sears and Alvah Roebuck grew to dominate the catalog mail order landscape. They prospered by creating a new way of shopping that catered specifically to the rural American market, a rapidly expanding opportunity created by the Homestead Act of 1862. The spread of railroads across the continent through the 1860s and ’70s allowed Sears to distribute physical goods across the nation. This, combined with their quality guarantee and free return policy, allowed Sears to grow rapidly to a position of dominance.

In the 1920s and ’30s, Robert E. Wood, the fourth president of Sears, took the company in a new direction. He reimagined the concept of the physical retail store, convincing the reluctant company to expand beyond its very lucrative catalog business. The approach was directly shaped by Sears’ foundation as a mail order business. In essence, Wood was hedging his bets. He built his stores far from downtown business centers, where land was cheap. And, if they failed as retail destinations, they could always be repurposed as mail order distribution and fulfillment centers. But Wood got lucky. Just about the time he made this call, America fell in love with the automobile. People didn’t mind driving a little to get to a store where they could save some money. This was followed by the suburbanization of America. When America moved to the suburbs, Sears was already there.

So, you could say Sears was amazingly smart with its strategy, presciently predicting two massive disruptions in the history of consumerism in America. Or you could say that Sears got lucky and the market happened to reward them – twice. In the language of evolution, two fortuitous mutations of Sears led to them being naturally selected by the marketplace. But, as Longstreth showed, their luck ran out on the third disruption: the move to online shopping.

A recent article looking back at Longstreth’s paper is titled “Could Sears Have Avoided Becoming Obsolete?”

I believe the answer is no. The article points to one critical strategic flaw as the reason for Sears’ slide into irrelevance: doubling down on their mall anchor strategy as the world stopped going to malls. In hindsight, this seems correct, but the fact is, it was no longer in Sears’ DNA to pivot into new retail opportunities. They couldn’t have jumped on the e-commerce bandwagon, just as a whale can’t learn how to fly. It’s easy for historians to cast a gaze backwards and find reasons for organizational failure, just as it’s easy to ascribe past business success to a brilliant strategy or a visionary CEO. But the fact is, as business academic Phil Rosenzweig shows in his masterful book The Halo Effect, we’re just trying to jam history into a satisfying narrative. And narratives crave cause and effect. We look for mistakes that lead to obsolescence. This gives us the illusion that we could avoid the same fate, if only we were smarter. But it’s not that simple. There are bigger forces at play here. And they can be found at the Edge of Chaos.

Edge of Chaos Theory

In his book, Complexity: Life at the Edge of Chaos, Roger Lewin chronicles the growth of the Santa Fe Institute, an academic think tank that has been dedicated to exploring complexity for the last 33 years. But the “big idea” in Lewin’s book is the Edge of Chaos Theory, a term coined by mathematician Doyne Farmer to describe a discovery by computer scientist Christopher Langton.

The theory, in its simplest form, is this: On one side you have chaos, where there is just too much dynamic activity and instability for anything sustainable to emerge. On the other side you have order, where rules and processes are locked in and things become frozen solid. These are two very different states that can apply to biology, sociology, chemistry, physics, economics – pretty much any field you can think of.

To go from one state to the other – in either direction – is a phase transition. Everything changes when you move from one to the other. On one side, turmoil crushes survivability. On the other, inertia smothers change. But in between there is a razor-thin interface, balanced precariously on the edge of chaos. Theorists believe that it’s in this delicate interface where life forms, where creativity happens and where new orders are born.
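To get a feel for what Langton was seeing, here’s a toy sketch – mine, in Python, not his actual experiment – of a one-dimensional cellular automaton with a randomly generated rule table. The parameter lam stands in for Langton’s lambda: the fraction of neighborhood patterns that produce a “live” cell. Set it low and the system freezes; set it high and it boils; the interesting territory is in between.

```python
# A toy illustration of the order/chaos spectrum (my sketch, not Langton's
# actual experiment). A 1-D binary cellular automaton gets a random rule
# table; `lam` is the fraction of the 8 possible 3-cell neighborhoods that
# map to "alive" -- a stand-in for Langton's lambda parameter.
import random

def run_ca(lam, width=200, steps=200, seed=42):
    rng = random.Random(seed)
    # Rule table: each neighborhood (0..7) maps to 1 with probability lam.
    rule = {n: (1 if rng.random() < lam else 0) for n in range(8)}
    cells = [rng.randint(0, 1) for _ in range(width)]
    changes = 0
    for _ in range(steps):
        # Next state of cell i depends on cells i-1, i, i+1 (edges wrap).
        nxt = [rule[(cells[i - 1] << 2) | (cells[i] << 1) | cells[(i + 1) % width]]
               for i in range(width)]
        changes += sum(a != b for a, b in zip(cells, nxt))
        cells = nxt
    # Mean activity per cell per step: 0 = frozen solid, ~0.5 = boiling chaos.
    return changes / (width * steps)

for lam in (0.05, 0.15, 0.30, 0.45):
    print(f"lambda={lam:.2f}  activity={run_ca(lam):.3f}")
```

Even this crude version shows the two regimes – and hints at how narrow the strip of interesting behavior between them really is.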

For any single player, it’s almost impossible to maintain this delicate balance. As organizations grow, I think they naturally move from chaos to order, at some point moving through this exceptional interface where the magic happens. Some companies manage to move through this space a few times. Apple is such a company. Sears probably moved through the space twice, once in setting up their mail order business and once with their move to suburban retail. But sooner or later, organizations go through their typical life cycle and inevitably choose order over chaos. At this point, their DNA solidifies to the point where they can no longer rediscover the delicate interface between the two.

It’s at the market level where we truly see the Edge of Chaos theory play out. The theory contends that adaptive systems in which there is feedback continually adapt toward the edge of chaos. But, as in any balancing act, it’s a very dynamic process. In the case of sociological evolution, it’s often a force (or convergence of forces) of technology that catalyzes the phase transition from order back to chaos. This is especially true when we look at markets. But this is an oscillation between order and chaos, with the market switching from phases of consolidation and verticalization to phases of chaos and sweeping horizontal activation. Markets will swing back and forth but will constantly reward the winners that live closest to the edge between the two states.

We all love to believe that immortality can be captured in our corporate form, whether it be our company or our own body. But history shows that we all have a natural life cycle. We may be lucky enough to extend our duration in that interface on the edge of chaos, but sooner or later our time there will end. Just as it did with Sears.

Our Brain on Reviews

There’s an interesting new study, just published, about how our brain mathematically handles online reviews, and I want to talk about it today. But before I get to that, I want to talk about foraging a bit.

The story of how science discovered our foraging behaviors serves as a mini lesson in how humans tick. The economists of the 1940s and ’50s built the world of microeconomics on the foundation that humans were perfectly rational – we were homo economicus. When making personal economic choices in a world of limited resources, we maximized utility. The economists of the time assumed this was a uniquely human property, bequeathed to us by virtue of the reasoning power of our superior brains.

In the ’60s, behavioral ecologists knocked our egos down a peg or two. It wasn’t just humans that could do this. Foxes could do it. Starlings could do it. Pretty much any species had the same ability to make seemingly optimal choices when faced with scarcity. It was how animals kept from starving to death. This was the birth of foraging theory. This wasn’t some behavior exclusive to Homo sapiens, directed from the heights of rationality downwards. It was an evolved behavior built from the ground up. It’s just that humans had learned how to apply it to our abstract notion of economic utility.

Three decades later, two researchers at Xerox’s Palo Alto Research Center found another twist. Not only had our ability to forage evolved all the way up our extensive family tree, but we also seemed to borrow this strategy and apply it to entirely new situations. Peter Pirolli and Stuart Card found that when humans navigate content in online environments, the exact same patterns emerge. We forage for information. The same calculations determine whether we’ll stay in an information “patch” or move on to more promising territory.

This seemed to indicate three surprising discoveries about our behavior:

  • Much of what we think is rational behavior is actually driven by instincts that have evolved over millions of years.
  • We borrow strategies from one context and apply them in another. We use the same basic instincts to find the FAQ section of a website that we used to find sustenance on the savannah.
  • Our brains seem to use Bayesian logic to continuously calculate and update a model of the world. We rely on this model to survive in our environment, whatever and wherever that environment might be.

So that brings us to the study I mentioned at the beginning of this column. If we take the above into consideration, it should come as no surprise that our brain uses similar evolutionary strategies to process things like online reviews. But the way it does it is fascinating.

The amazing thing about the brain is how it seamlessly integrates and subconsciously synthesizes information and activity from different regions. For example, in foraging, the brain integrates information from the regions responsible for wayfinding – knowing our place in the world – with signals from the dorsal anterior cingulate cortex, an area responsible for reward monitoring and executive control. Essentially, the brain is constantly updating an algorithm about whether the effort required to travel to a new “patch” will be balanced by the reward we’ll find when we get there. You don’t consciously marshal the cognitive resources required to do this. The brain does it automatically. What’s more, the brain uses many of the same resources and the same algorithm whether we’re considering going to McDonald’s for a large order of fries or deciding what online destination would be the best bet for researching our upcoming trip to Portugal.
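The textbook formalization of that effort-versus-reward calculation is Charnov’s marginal value theorem, the workhorse of foraging theory. Here’s a toy version in Python – my illustration, not anything taken from the study itself – where the reward in a patch has diminishing returns and the forager picks the stay-time that maximizes the overall rate of return:

```python
# A toy sketch of the patch-leaving calculation (Charnov's marginal value
# theorem from classic foraging theory -- not the study's own model).
import math

def gain(t, total=100.0, rate=0.5):
    """Cumulative reward after t minutes in a patch (diminishing returns)."""
    return total * (1 - math.exp(-rate * t))

def best_residence_time(travel_time, dt=0.01, horizon=30.0):
    """Find the stay-time that maximizes long-run reward rate by brute force."""
    best_t, best_rate = 0.0, 0.0
    t = dt
    while t <= horizon:
        r = gain(t) / (travel_time + t)  # reward per minute, travel included
        if r > best_rate:
            best_t, best_rate = t, r
        t += dt
    return best_t, best_rate

for travel in (1.0, 5.0, 20.0):
    t, r = best_residence_time(travel)
    print(f"travel time {travel:>4} min -> stay {t:.1f} min (rate {r:.1f}/min)")
```

The model’s signature prediction: the farther you have to travel to the next patch, the longer you should stay in the one you’re in. Swap minutes of walking for clicks and page loads, and you have information foraging.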

In evaluating online reviews, we have a different challenge: how reliable are the reviews? The context may be new – our ancestors didn’t have TripAdvisor or Airbnb ratings for choosing the right cave to sleep in tonight – but the problem isn’t. What criteria should we use when we decide to integrate social information into our decision-making process? If Thorlak the bear hunter tells me there’s a great cave a half-day’s march to the south, should I trust him? Experience has taught us a few handy rules of thumb when evaluating sources of social information: the reliability of the source and the consensus of the crowd. Has Thorlak ever lied to us before? Do others in the tribe agree with him? These are hardwired social heuristics. We apply them instantly and instinctively to new sources of information that come from our social network. We’ve been doing it for thousands of years. So it should come as no surprise that we borrow these strategies when dealing with online reviews.

In a neuro-scanning study from University College London, researchers found that reliability plays a significant role in how our brains treat social information. Once again, a well-evolved capability of the brain is recruited to help us in a new situation. The dorsomedial prefrontal cortex is the area of the brain that keeps track of our social connections. This “social monitoring” ability of the brain worked in concert with the ventromedial prefrontal cortex, an area that processes value estimates.

The researchers found that this circuit works like a Bayesian computer when considering incoming information. First we establish a “prior” that represents a model of what we believe to be true. Then we subject this prior to statistical updating based on new information – in this case, online reviews. If our confidence in this prior is high and the incoming information is weak, we tend to stick with our initial belief. But if our confidence is low and the incoming information is strong – i.e., a lot of positive reviews – then the brain overrides the prior and establishes a new belief, based primarily on the new information.
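Here’s a back-of-the-napkin version of that updating logic. I’ve reached for a Beta-Binomial model because it’s the simplest way to show confidence-weighted updating – that’s my choice of formalism, not necessarily the one the researchers used:

```python
# A minimal sketch of confidence-weighted Bayesian updating (my formalism,
# not necessarily the study's). The prior's pseudo-counts encode confidence:
# a confident prior barely moves under weak evidence, while a diffuse prior
# is overridden by a strong run of reviews.

def posterior_belief(prior_pos, prior_neg, reviews_pos, reviews_neg):
    """Posterior probability the option is 'good' under a Beta-Binomial model."""
    a = prior_pos + reviews_pos
    b = prior_neg + reviews_neg
    return a / (a + b)  # posterior mean of Beta(a, b)

# Confident prior (90 good vs 10 bad pseudo-counts) meets 3 bad reviews:
print(posterior_belief(90, 10, 0, 3))   # ~0.874 -- belief barely budges
# Diffuse prior (1 vs 1) meets 40 positive and 2 negative reviews:
print(posterior_belief(1, 1, 40, 2))    # ~0.932 -- new information dominates
```

The prior’s pseudo-counts are the confidence knob: the more evidence your existing belief rests on, the more reviews it takes to move you.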

While this seems like common sense, the mechanisms at play are interesting. The brain effortlessly pattern-matches new types of information and recruits the region that most likely evolved to successfully interpret that kind of information. In this case, the brain has decided that online reviews are most like information that comes from social sources. It combines the interpretation of this data with an algorithmic function that assigns value to the new information and calculates a new model – a new understanding of what we believe to be true. And it does all this “under the hood” – sitting just below the level of conscious thought.

My Other Life – On Two Wheels

If you’re reading this blog, you probably know me as a digital marketing/UX guy. But I have another life... not so much shrouded in mystery as just newly revealed. I love riding my bike. And it’s when I wear that hat (or helmet) that Western Living Magazine asked for input. They wanted 5 Great Road Rides in the Okanagan. I obliged. If you’ve come here looking for that, I will redirect you to my G.O. Cycling Blog. If you’ve come here looking for how the ventromedial prefrontal cortex correlates with online foraging activities – well then, God help you, you’ve come to the right place!

Flow and the Machine

“In the future, either you’re going to be telling a machine what to do, or the machine is going to be telling you.”

Christopher Penn – VP of Marketing Technology, Shift Communications.

I often talk about the fallibility of the human brain – those irrational, cognitive biases that can cause us to miss the reality that’s right in front of our face. But there’s another side to the human brain – the intuitive, almost mystical machinations that happen when we’re on a cognitive roll, balancing gloriously on the edge between consciousness and subconsciousness. Malcolm Gladwell took a glancing shot at this in his mega-bestseller Blink. But I would recommend going right to the master of “flow” – Mihaly Csikszentmihalyi (pronounced, if you’re interested, me-hi Chick-sent-me-hi). The Hungarian psychologist coined the term “flow” – referring to a highly engaged mental state where we’re completely absorbed in the work at hand. Csikszentmihalyi calls it the “psychology of optimal experience.”

It turns out there’s a pretty complicated neuroscience behind flow. In a blog post, gamer Adam Sinicki describes a state where the brain finds an ideal balance between instinctive behavior and total focus on one task. The state is called transient hypofrontality. It can sometimes be brought on by physical exercise, which is why some people can think better while walking, or even jogging. The brain juggles the resources required, and this can force a stepping-down of the prefrontal cortex, the part of the brain that causes us to question ourselves. This part of the brain is essential in unfamiliar circumstances, but in a situation where we’ve thoroughly rehearsed the actions required, it’s actually better if it takes a break. This allows other – more intuitive – parts of the brain to come to the fore. And that may be the secret of “flow.” It may also be the one thing that machines can’t replicate – yet.

The Rational Machine

If we were to compare the computer to a part of the brain, it would probably be the prefrontal cortex (PFC). When we talk about cognitive computing, what we’re really talking about is building a machine that can mimic – or exceed – the capabilities of the PFC. This is the home of our “executive function” – complex decision making, planning, rationalization and our own sense of self. It’s probably not a coincidence that the part of the brain we rely on to reason through complex challenges like designing artificial intelligence would build a machine in its own image. And in this instance, we’re damned close to surpassing ourselves. The PFC is an impressive chunk of neurobiology in its flexibility and power, but speedy it’s not. In fact, we’ve found that if we happen to make a mistake, the brain slows almost to a standstill. It shakes our confidence and stops any “flow” that might be happening dead in its tracks. This is what happens to athletes when they choke. With artificial intelligence, we are probably on the cusp of creating machines that can do most of what the PFC can do, only faster, more reliably and with the ability to process much more information.

But there’s a lot more to the brain than just the PFC. And it’s in this ethereal intersection between reason and intuition where the essence of being human might be hiding.

The Future of Flow

What if we could harness “flow” at will, working in partnership with a machine that crunches data in real time and presents us with the inputs required to continue our flow-fueled exploration without the fear of making a mistake? It’s not so much a machine telling us what to do – or the reverse – as it is a partnership between human intuition and machine-based rationalization. It’s analogous to driving a modern car, where intelligent safety and navigation features backstop our ability to drive.

Of course, it may just be a matter of time before machines best us in this area as well. Perhaps machines already have mastered flow because they don’t have to worry about the consequences of making a mistake. But it seems to me that if humans have a future, it’s not going to be in our ability to crunch data and rationalize. We’ll have to find something a little more magical to stake our claim with.

How I Cleared a Room Full of Marketing Techies

Was it me?

Was it something I said?

I don’t think so. I think it was just that I was talking about B2B.

Let me explain.

Last week, I was in San Francisco talking at a marketing technology conference. My session, in which I was a co-presenter, was going to be about psychographic profiling and A.I. – in B2B marketing. It was supposed to start immediately after another session on “cognitive marketing.” During this prior session, I decided to stand at the back of the room so I didn’t take up a seat.

That proved to be a mistake. During the session, which was in one of three tracks running at the time, the medium-sized room filled to standing-room-only capacity. The presenter talked about how machine learning – delivered via IBM’s Watson, Google’s DeepMind or Amazon’s Cloud AI solution – is going to change marketing and, along with it, the job of a human marketer.

I found it interesting. The audience seemed to think so as well. The presenter wrapped up – the moderator got up to thank him and introduce me as the next presenter – and about 60% of the room stood as one and headed for the exit door, creating a solid human wall between myself and the stage. It took me – the fish – about 5 minutes of proverbially and physically swimming upstream before I could get to the stage. It wasn’t the smoothest of transitions.

I tend to take these things personally. But I honestly don’t think it was me. I think it was the fact that “B2B” was in the title of my presentation. I have found that as soon as you slap that label on anything, marketers tend to swarm in the opposite direction. If there is a B2B track at a general marketing show, you can bet your authentic Adam West Batman action figure (not that I would have any such thing) that it’s tucked away in some far-off corner of the conference center, down three flights of escalators, where you turn right and head towards the parking garage. My experience at this past show was analogous to the lot of B2B marketing in general. Whenever we start talking about it, people start heading for the door.

I don’t get it.

It’s not a question of budget. Even in terms of marketing dollars, a lot of budget gets allocated to B2B. An Outsell report for 2016 pegged the total US B2B marketing spend at about $151 billion. That compares respectably with the total consumer ad spend of $192 billion, according to eMarketer.

And it’s definitely not a question of market size. It’s very difficult to size the entire B2B market, but there’s no doubt that it’s huge. A Forrester report estimates that $8 trillion was sold in the US B2B retail space in 2014. That’s almost half of the US gross domestic product that year. And a huge swath of the business is happening online. The worldwide B2B eCommerce market is projected to be $6.7 trillion by 2020. That’s twice as big as the projected online B2C market ($3.2 trillion).

So what gives? B2B is showing us the money. Why are we not showing it any love? Just digging up the background research for this column proved to be painful. Consumer spend and marketing dollar numbers come gushing off the page of even a half-assed Google search. But B2B stats? Cue the crickets.

I have come to the conclusion that it’s just lack of attention, which probably comes from a lack of sex appeal. B2B is like the debate club in high school. While everyone goes gaga during school assemblies over the cheerleading squad and the football team, the people who will one day rule the world quietly gather after class with Mr. Tilman in the biology lab to plot their debate strategy for next week’s matchup against J.R. Matheson Senior High. It goes without saying that parents will be the only ones who actually show up. And even some of them will probably have to stay home to cut the grass.

Those debaters will probably all grow up to be B2B marketers.

It may also be that B2B marketing is hard. Like – juggling Rubik’s Cubes while simultaneously solving them – hard. At least, it’s hard if you dare to go past the “get a lead and hound them mercilessly until they either move to another country or give in and buy something to get you off their back” school of marketing. If you try to do something as silly as predict purchase behaviors, you run into the problem of compound complexity. We have been trying for some time, with limited success, to predict a single consumer’s behavior. In B2B, you have to predict what might happen when you assemble a team of potential buyers – each with their own agenda, emotions and varying degrees of input – and ask them to come to a consensus on an organizational buying decision.

That can make your brain hurt. It’s a wicked problem to the power of 5.4 (the average number of buyers involved in a B2B buying decision, according to CEB’s research). It’s the Inconvenient Truth of Marketing.
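To see why that exponent stings, here’s the back-of-the-envelope version – a toy model of mine, not CEB’s, that assumes each stakeholder decides independently:

```python
# A toy model (mine, not CEB's) of compound complexity in B2B buying:
# if each of 5.4 stakeholders independently favors the deal with
# probability p, the chance of full consensus is roughly p ** 5.4.
for p in (0.9, 0.7, 0.5):
    print(f"individual yes = {p:.0%} -> group consensus ~ {p ** 5.4:.1%}")
# individual yes = 90% -> group consensus ~ 56.6%
# individual yes = 70% -> group consensus ~ 14.6%
# individual yes = 50% -> group consensus ~ 2.4%
```

Real buying committees aren’t independent, of course – they lobby, trade and veto each other – which mostly makes the prediction problem harder, not easier.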

That, I keep telling myself, is why everyone was rushing for the door the minute I started walking to the stage. I shouldn’t take it personally.

Disruption in the Rear View Mirror

Oh... it’s so easy to be blasé. I always scan the MediaPost headlines each week to see if there’s anything to spin. I almost skipped right past a news post by Larissa Faw: “Zenith: Google Remains Top-Ranked Media Company By Ad Revenue.”

“Of course Google is the top-ranked media company,” I yawned as I was just about to click on the next email in my inbox. Then it hit me. To quote Michael Bublé, “Holy Shitballs, Mom!”

Maybe that headline doesn’t seem extraordinary in the context of today, but I’ve been doing this stuff for almost 20 years now, and in that context – well, it’s huge! I remembered a column I wrote ages ago speculating that Google had barely scratched its potential. After a little digging, I found it. It was from October 2006, just over a decade ago. Google had just passed the $6 billion mark in annual revenue. Ironically, that seemed a bigger deal then than their current revenue of almost $80 billion seems today. In that column, I pushed to the extreme and speculated that Google could someday pass $200 billion in revenue. While we’re still only a third of the way there, the claim doesn’t seem nearly as ludicrous as it did back then.
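For what it’s worth, here’s the rough arithmetic behind that – a naive compound-growth extrapolation, not a forecast:

```python
# Back-of-the-envelope: ~$6B (2006) to ~$80B (2016) in annual revenue.
import math

cagr = (80 / 6) ** (1 / 10) - 1
print(f"implied annual growth: {cagr:.1%}")  # ~29.6%

# At that rate, how long from $80B to $200B?
years = math.log(200 / 80) / math.log(1 + cagr)
print(f"years from 2016 to $200B: {years:.1f}")  # ~3.5
```

Which is exactly why the claim no longer sounds so ludicrous.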

But here’s the line that really made me realize how far we’ve come in the ten and a half years since I wrote that column: “Google and Facebook together accounted for 20% of global advertising expenditure across all media in 2016, up from 11% in 2012. They were also responsible for 64% of all the growth in global ad spend between 2012 and 2016.”

Two companies that didn’t exist 20 years ago now account for 20% of all global advertising expenditure. And the speed with which they’re gobbling up advertising budgets is accelerating. If you’re a dilettante student of disruption, as I am, those are pretty amazing numbers. In the day-to-day of MediaPost – and digital marketing in general – we tend to accept all this as normal. It’s like we’re surfing on top of a wave without realizing the wave is 300 freakin’ feet high. Sometimes you need to zoom out a little to realize how momentous the everyday is. And if you look at this on a scale of decades rather than days, you start to get a sense that the speed of change is massive.

To me, the most interesting thing about this is that both Google and Facebook have introduced a fundamentally new relationship between advertising and its audience. Google’s entrée is – of course – intent-based advertising. And Facebook’s is based on socially mediated network effects. Both of these things required the overlay of digital connection. That – as they say – has made all the difference. And that is where the real disruption can be found. Our world has become a fundamentally different place.

Much as we remain focused on the world of advertising and marketing here in our little corner of the digital world, it behooves us to remember that advertising is simply a somewhat distorted reflection of the behaviors of the world in general. If things are being disrupted here, it is because things are being disrupted everywhere. As regards us beings of flesh, bone and blood, that disruption has three distinct beachheads: the complicated relationship between our brains and the digital tools we have at our disposal, the way we connect with each other, and a dismantling of the restrictions of the physical world at the same time we build the scaffolding of a human-designed digital world. Any one of these has the potential to change our species forever. With all three bearing down on us, permanent change is a lead-pipe cinch.

Thirty years is a nanosecond in terms of human history. Even on the scale of my lifetime, it seems like yesterday. Reagan was president. We were terrorized by the Unabomber. News outlets were covering the Iran-Contra affair. U2 released The Joshua Tree. Platoon won the Best Picture Oscar. And if you wanted to advertise to a lot of people, you did so on a major TV network with the help of a Madison Avenue agency. Thirty years ago, none of what I’m talking about existed. Nothing. No Google. No Facebook. No Internet – at least, not in a form any of us could appreciate.

As much as advertising has changed in the past 30 years, it has only done so because we – and the world we inhabit – have changed even more. And if that thought is a little scary, just think what the next 30 years might bring.