The Mindful Democracy Manifesto


The best argument against democracy is a five-minute conversation with the average voter.

Winston Churchill

Call it the Frog in Boiling Water Syndrome. It happens when creeping changes in our environment reach a disruptive tipping point that triggers massive change – or – sometimes – a dead frog. I think we’re going through one such scenario now. In this case, the boiling water may be technology and the frog may be democracy.

As I said in Online Spin last week, the network effects of President-elect Donald Trump’s victory may be yet another unintended consequence of technology.

Last week, I walked through the dynamics I believe lay behind the election in some detail. This week, I want to focus more on the impact of technology on democratic elections in general. In particular, I want to explore the network effects of technology, the spread of information and sweeping populist movements like the one we saw on November 8th.

In an ideal world, access to information should be the bedrock of effective democracy. Ironically, however, now that we have more access than ever, that bedrock is being chipped away. There has been a lot of finger-pointing at the dissemination of fake news on Facebook, but that’s just symptomatic of a bigger ill. The real problem is the filter bubbles and echo chambers that have formed on social networks. And they formed because friction has been eliminated. The way we were informed in this election looked very different from the way we were informed in elections past.

Information is now spread more through emergent social networks than through editorially controlled media channels. That makes it subject to unintended network effects. Because the friction of central control has been largely eliminated, the spread of information relies on the rules of emergence: the aggregated and amplified behaviors of the individual agents.

When it comes to predicting the behaviors of individual human agents, our best bet is to look at the innate behaviors that lie below the threshold of rational thought. Up to now, social conformity was a huge factor. And the rallying point of that social conformity was largely formed and defined by information coming from the mainstream media. The trend of that information over the past several decades has been toward the left end of the ideological spectrum. Political correctness is one clear example of this evolving trend.

But in this past election, there was a shift in individual behavior thanks to the elimination of friction in the spread of information – away from social conformity and towards other primal behaviors. Xenophobia is one such behavior. Much as some of us hate to admit it, we’re all xenophobic to some degree. Humans naturally choose familiar over foreign. It’s an evolved survival trait. And, as American economist Thomas Schelling showed in 1971, it doesn’t take a very high degree of xenophobia to produce significant segregation. He showed that even people with only a mild preference for being among people like themselves (about 33%) would, given the ability to move wherever they wished, end up in highly segregated neighborhoods. Imagine, then, the segregation that happens when friction is essentially removed from social networks. You don’t have to be a racist to want to be with people who agree with you. Liberals are definitely guilty of the same bias.
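Schelling’s result is easy to reproduce in code. Here’s a toy version of his model – a one-dimensional ring rather than his original checkerboard, with invented parameters, so treat it as a sketch rather than a faithful replication:

```python
import random

def similarity(cells, i):
    """Fraction of the occupied neighbors (two on each side of a ring)
    that share the type of the agent at position i."""
    n = len(cells)
    neighbors = [cells[(i + d) % n] for d in (-2, -1, 1, 2)]
    occupied = [c for c in neighbors if c != 0]
    if not occupied:
        return 1.0  # no neighbors to disagree with
    return sum(c == cells[i] for c in occupied) / len(occupied)

def schelling_ring(n=120, vacancy=0.1, threshold=1/3, sweeps=200, seed=1):
    """Two agent types (1 and 2) plus vacant cells (0). Any agent with
    fewer than `threshold` like neighbors moves to a random vacancy."""
    rng = random.Random(seed)
    n_vacant = int(n * vacancy)
    n_agents = n - n_vacant
    cells = [0] * n_vacant + [1] * (n_agents // 2) + [2] * (n_agents - n_agents // 2)
    rng.shuffle(cells)  # start fully mixed
    for _ in range(sweeps):
        moved = False
        for i in range(n):
            if cells[i] and similarity(cells, i) < threshold:
                j = rng.choice([k for k in range(n) if cells[k] == 0])
                cells[i], cells[j] = 0, cells[i]
                moved = True
        if not moved:  # everyone is satisfied
            break
    return cells
```

Start it fully mixed and, even though each agent only insists that a third of its neighbors be like itself, runs of 1s and 2s typically emerge – the mild individual preference aggregates into segregation.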

The election of 2016 marked the final death throes of the mythical Homo Politicus – the fiction of the rational voter. Just like Homo Economicus – who predeceased him/her thanks to the groundbreaking work of psychologists Amos Tversky and Daniel Kahneman – much as we might believe we make rational voting choices, we are all a primal basket of cognitive biases. And these biases were fed a steady stream of misinformation and questionable factoids thanks to our homogenized social connections.

This was not just a right wing trend. The left was equally guilty. Emergent networks formed and headed in diametrically opposed directions. In the middle, unfortunately, was the future of the country and – perhaps – democracy. Because, with the elimination of information distributional friction, we have to ask the question, “What will democracy become?” I have an idea, but I’ll warn you, it’s not a particularly attractive one.

If we look at democracy in the context of an emergent network, we can reasonably predict a few things. If the behaviors of the individual agents are not uniform – if half always turn left and half always turn right – that dynamic tension will set up an oscillation. The network will go through opposing phases. The higher the tension, the bigger the amplitude and the more rapid the frequency of those oscillations. The country will continually veer right and then veer left.

Because those voting decisions are driven more by primal reactions than rational thought, votes will become less about the optimal future of the country and more about revenge on the winner of the previous election. As the elimination of friction in information distribution accelerates, we will increasingly be subject to the threshold mob effect I described in my last column.

So, is democracy dead? Perhaps. At a minimum, it is debilitated. At the beginning of the column, I quoted Winston Churchill. Here is another quote from Churchill:

Many forms of Government have been tried, and will be tried in this world of sin and woe. No one pretends that democracy is perfect or all-wise. Indeed it has been said that democracy is the worst form of Government except for all those other forms that have been tried from time to time.…

We are incredibly reluctant to toy with the idea of democracy. It is perhaps the most cherished ideal we cling to in the Western World. But if democracy is the mechanism for a never-ending oscillation of retribution, perhaps we should be brave enough to consider alternatives. In that spirit, I put forward the following:

Mindful Democracy.

The best antidote to irrationality is mindfulness – forcing our prefrontal cortex to kick in and lift us above our primal urges. But how do we encourage mindfulness in a democratic context? How do we break out of our social filter bubbles and echo chambers?

What if we made the right to vote contingent on awareness? What if you had to take a test before you cast your vote? The objective of the test would be simple: how aware were you, not only of your own candidate’s positions and policies but – more importantly – those of the other side? You don’t have to agree with the other side’s position; you just have to be aware of it. Your awareness score would then be assigned as a weight to your vote. The higher your level of awareness, the more your vote would count.
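Mechanically, the tally itself would be trivial. Here’s a hypothetical sketch – the candidates, the individual scores and the 0-to-1 scoring scale are all invented for illustration:

```python
def weighted_tally(ballots):
    """Sum votes where each ballot carries an awareness score in [0, 1]
    that acts as the weight of that vote."""
    totals = {}
    for candidate, awareness in ballots:
        if not 0.0 <= awareness <= 1.0:
            raise ValueError("awareness score must be between 0 and 1")
        totals[candidate] = totals.get(candidate, 0.0) + awareness
    return totals

# Three raw votes for A against two for B, but B's voters scored
# higher on awareness of the other side, so B wins the weighted tally.
ballots = [("A", 0.2), ("A", 0.3), ("A", 0.4), ("B", 0.9), ("B", 0.8)]
print(weighted_tally(ballots))
```

The interesting (and contentious) part isn’t the arithmetic, of course – it’s how the awareness score would be measured in the first place.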

I know I’m tiptoeing on the edge of sacrilege here, but consider it a straw man. I hesitated to go public with this, but I’ve been thinking about it for some time and I’m not so sure it’s worse than the increasingly shaky democratic status quo we have now. It’s equally fair to the right and the left. It encourages mindfulness. It breaks down echo chambers.

It’s worth thinking about.

Mobs, Filter Bubbles and Democracy

You know I love to ask “why.” And last Tuesday provided me with the mother of all whys. I know a lot of digital ink will be spilled on this – but I just can’t help myself.

So… why?

Eight years ago, on Mediapost, I wrote that we had seen a new type of democracy. I still think I was right. What I didn’t know at the time was that I had just seen one side of a more complex phenomenon. Tuesday we saw another side. And we’re still reeling from it.

It’s not the first time we’ve seen this. Trump’s ascendancy follows the same playbook as Brexit, Marine Le Pen’s right-wing surge in France and Rodrigo Duterte’s recent win of the presidency of the Philippines. Behind all of these, a few common factors are at play. Together, they combine to create a new social phenomenon. And, when combined with traditional democratic vehicles, they can cause bad things to happen to good people.

The FYF (F*&k You Factor)

Michael Moore absolutely nailed what happened Tuesday night, even providing a state-by-state, vote-by-vote breakdown of what went down – but he did it back in July. And he did it because he and Trump are both masters of the FYF. Just like you can’t bullshit a bullshitter – you can’t propagandize a propagandist. Trump had borrowed a page out of Moore’s playbook and Moore could see it coming a mile away.

The FYF requires two things – fear and anger. Anger comes from the fear. Typically, it’s fear of – and anger about – something you feel is beyond your control. This inevitably leads to a need to blame someone or something. The FYF master first creates the enemy, and then gives you a way to say FY to them. In Moore’s words, “The Outsider, Donald Trump, has arrived to clean house! You don’t have to agree with him! You don’t even have to like him! He is your personal Molotov cocktail to throw right into the center of the bastards who did this to you!”

What Michael Moore knew – and what the rest of us would figure out too late – was that for half the US, this wasn’t a vote for president. This was a vote for destruction. The more outrageous Trump seemed, the more destructive he would be. Whether it was intentional or not, Trump’s genius was in turning Clinton’s competence into a liability. He succeeded in turning this into a simple yes-or-no choice: vote for the Washington you know – and hate – or blow it up.

The Threshold Factor

The FYF provides the core – the power base. Trump’s core was angry white men. But then you have to extend beyond this core. That’s where mob mentality comes in.

In 1978, Mark Granovetter wrote a landmark paper on threshold models of behavior. I’ll summarize. Let’s say you have two choices of behavior. One is to adhere to social and behavioral norms. Let’s call this the status quo option. The other is to do something you wouldn’t normally do, like defy your government – let’s call this the F*&k You option. Which option you choose is based on a risk/reward calculation.

What Granovetter realized is that predicting the behavior of a group isn’t a binary model – it’s a spectrum. In any group of people, you’ll find a range of risk/reward thresholds that must be crossed to go from one behavioral alternative to the other. Because we’re social animals, Granovetter theorized that the deciding factor is the number of other people we need to see who are also willing to choose option two – saying F*&k you. The more people willing to make that choice, the lower the risk that you’ll be singled out for your behavior. Some people don’t need anyone – they’re the instigators. Let’s give them a “0”. Other people may never join the mob mentality, even if everyone else has. We’ll give them a “100.” In between, you have all the rest, ranging from 1 to 99.

The instigators start the reaction. Depending on the distribution of thresholds, if there are enough 1, 2, 3’s and so forth, the bandwagon effect happens quickly, spreading through the group. It isn’t until you hit a threshold gap that the chain reaction stops. For example, if you have a small group of 1’s, 2’s and 3’s, but the next lowest threshold is 10, the movement may be stopped in its tracks.
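The whole model fits in a few lines of code. Here’s a minimal sketch (the function is my own reduction, but the numbers reproduce Granovetter’s well-known examples):

```python
def cascade_size(thresholds):
    """Granovetter's threshold model: an agent joins once the number of
    people already acting meets or exceeds its threshold. Returns the
    final size of the cascade."""
    joined = 0
    while True:
        eligible = sum(1 for t in thresholds if t <= joined)
        if eligible == joined:  # nobody new crosses their threshold
            return joined
        joined = eligible

# Thresholds 0, 1, 2, ..., 99: each joiner tips the next person,
# and the whole crowd of 100 ends up in the mob.
print(cascade_size(range(100)))                    # 100
# Raise the lone "1" to a "2" and the instigator acts alone.
print(cascade_size([0, 2] + list(range(2, 100))))  # 1
# A threshold gap: a few low thresholds, then a jump to 10.
print(cascade_size([0, 1, 2, 3] + [10] * 5))       # 4
```

The second and third runs show how fragile the cascade is: an identical-looking crowd with one threshold nudged, or with a gap in the distribution, produces a completely different outcome.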

Network Effects and Filter Bubbles

None of what I’ve described so far is new. People have always been angry and mobs have always formed. What is new, however, is the nature of this particular mob.

As you probably deduced, the threshold model is one of network effects. It depends on finding others who share similar views. If you can aggregate a critical mass of low thresholds, you can trigger bigger bandwagon effects – maybe even big enough to jump threshold gaps.

Up to now, Granovetter’s threshold model was constrained by geography. You had to have enough low-threshold people in physical space to start the chain reaction. But we live in a different world. Now, groups of 0s, 1s and 2s living in Spokane, Washington, Pickensville, Alabama, and Marianna, Florida can all be connected online. When this happens, we get a new phenomenon: the filter bubble.

One thing we learned this election was how effective filter bubbles are. I have a little over 440 connections on Facebook. In the months and weeks leading up to the election, I saw almost no support for Trump in my feed. I agreed ideologically with the posts of almost everyone in my network. I suspect I’m not alone. I’m sure Trump supporters got equally homogeneous feedback from their respective networks. This put us in what we call a filter bubble. In the geographically unrestricted network of online connections, our network nodes tend to be ideologically homogeneous.

Think about what this does to Granovetter’s threshold model. We fall into the illusion that everyone thinks the same way we do. This reduces threshold gaps and accelerates momentum for non-typical options. It tips the balance away from risk and towards reward.

A New Face of Democracy

I believe these three factors set the stage for Donald Trump. I also believe they threaten to turn democracy into a never-ending cycle of left-vs.-right backlashes. I want to explore this some more, but given that I’ve already egregiously exceeded my typical word count for Online Spin, we’ll have to pick up the thread next week.

Prospect Theory, Back Burners and Relationship Risk

What do relationship infidelity and consumer behavior have in common? Both are changing thanks to technology – or, more specifically, the intersection between technology and our brains. And regular readers know that stuff is right in my wheelhouse.

So I was fascinated by a recent presentation given by Dr. Michelle Drouin from Purdue University. She talked about how connected technologies are impacting the way we think about relationship investment.

The idea of “investing” in a relationship probably paints it in a less romantic light than we’re used to, but it’s an accurate description. We calculate odds and evaluate risk. It’s what we do. Now, in the case of love, an admittedly heuristic process becomes even less rational. Our subliminal risk-appraisal system is subjugated by a volatile cocktail of hormones and neurotransmitters. But – at the end of the day – we calculate odds.

If you take all this into account, Dr. Drouin’s research into “back burners” becomes fascinating, if not all that surprising. In the paper, back burners are defined as “a desired potential or continuing romantic/sexual partner with whom one communicates, but to whom one is not exclusively committed.” “Back burners” are our fallback bets when it comes to relationships or sexual liaisons. And they’re not exclusive to single people. People in committed relationships also keep a stable of “back burners.” Women keep an average of 4 potential “relationship” candidates from their entire list of contacts and 8 potential “liaison” candidates. Men, predictably, keep more options open. Male participants in the study reported an average of over 8 “relationship” options and 26 “liaison” back burners. Drouin’s hypothesis is that this number has recently jumped thanks to technology, especially the connectivity offered through social media. We’re keeping more “back burners” because we can.

What does this have to do with advertising? The point I’m making is that this behavior is not unique. Humans treat pretty much everything like an open marketplace. We are constantly balancing risk and reward amongst all the options that are open to us, subconsciously calculating the odds. It’s called Prospect Theory. And, thanks to technology, that market is much larger than it’s ever been before. In this new world, our brain has become a Vegas odds maker on steroids.
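Prospect theory even puts a formula to that odds-making. Tversky and Kahneman’s value function captures two of the relevant biases – diminishing sensitivity to the size of an outcome, and loss aversion – and the sketch below uses their published median parameter estimates:

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Tversky and Kahneman's (1992) value function: outcomes are judged
    relative to a reference point, sensitivity diminishes as outcomes
    grow, and losses loom about 2.25 times larger than equivalent gains."""
    if x >= 0:
        return x ** alpha          # concave over gains
    return -lam * (-x) ** beta     # steeper and convex over losses

# A $100 loss "feels" more than twice as big as a $100 gain:
print(prospect_value(100), prospect_value(-100))
```

That asymmetry is one plausible reason the back burner is so cheap to keep: the felt cost of losing an option outweighs the felt gain of an equivalent new one.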

In Drouin’s research, it appears that new technologies like Tinder, WhatsApp and Facebook have had a huge impact on how we view relationships. Our fidelity balance has tipped to the negative. Because we have more alternatives – and it’s easier to stay connected with those alternatives and keep them on the “back burner” – the odds favor keeping our options open. Monogamy may not be our best bet anymore. Facebook is cited in one-third of all divorce cases in the U.K. And in Italy, evidence from the social messaging app WhatsApp shows up in nearly half of divorce proceedings.

So, it appears that humans are loyal – until a better offer with a degree of risk we can live with comes along.

This brings us back to our behaviors in the consumer world. It’s the same mental process, applied in a different environment. In this environment, relationships are defined as brand loyalty. And, as Emanuel Rosen and Itamar Simonson show in their book Absolute Value, we are increasingly keeping our options open in more and more consumer decisions. When it comes to buying stuff – even if we have brand loyalty – we are increasingly aware of the “back burners” available to us.


Sorry Folks – Blame it on Ed

Just when you thought it was safe to assume I’d be moving on to another topic, I’m back. Blame it on Ed Papazian, who commented on last week’s column about the Rise of the Audience marketplace. I’ll respond to his comment in multiple parts. First, he said:

“I think it’s fine to speculate on “audience” based advertising, by which you actually mean using digital, not traditional media, as the basis for the advertising of the future.”

All media is going to be digital. Our concept of “traditional” media is well down its death spiral. We’re less than a decade away from all media being delivered through digital platforms that allow for real-time targeting of advertising. True, we have to move beyond the current paradigm of mass-distributed, channel-restricted advertising we seem stuck in, but the technology is already there. We (by which I mean the ad industry) just have to catch up. Ed continues in this vein:

“However, in a practical sense, not only is this, as yet, merely a dream for TV, radio and print media, but it is also an oversimplification.”

Is it an oversimplification? Let’s remember that more and more of our media consumption is becoming trackable from both ends. We no longer have to track from the point of distribution. Tracking is also possible at the point of consumption. We are living with devices that increasingly have insight into what we’re doing at any moment of the day. It’s just a matter of us giving permission to be served relevant, well targeted ads based on the context of our lives.

But what would entice us to give this permission? Ed goes on to say that…

“Even if a digital advertiser could actually identify every consumer in the U.S. who is interested—or “in the market” for what his ads are trying to sell and also how they are pitching the product/service—and send only these people “audience targeted ads”, many of the ads will still not be of interest…”

Papazian proposed an acid test of sorts (or, more appropriately – an antacid test):

“Why? Because they are for unpleasant or mundane products—toilet bowel cleansers, upset stomach remedies, etc.—-or because the ads are pitching a brand the consumer doesn’t like or has had a bad experience with.”

Okay, let me take up the challenge that Ed has thrown down (or up?). Are ads for stomach remedies always unwanted? Not if I have a history of heartburn, especially when my willpower drops and my diet changes as I’m travelling. Let’s take it one step further. I’ve made a dinner reservation for 7 pm at my favorite Indian food restaurant while I’m in San Francisco. It’s 2 pm. I’ve just polished off a Molinari’s sandwich and I’m heading back to my hotel. As I turn the corner at O’Farrell and Powell, an instant coupon is delivered to my phone with 50% off a new antacid tablet at the Walgreen’s ahead, together with the message: “Prosciutto, pepperoncinis and pakoras in the same day? Look at you go! But just in case…”

The world Ed talks about does have a lot of unwanted advertising. But in the world I’m envisioning, where audiences are precisely targeted, we will hopefully eliminate most of those unwanted ads. Those ads are the by-product of the huge inefficiencies in the current advertising marketplace. And it’s this inefficiency that is rapidly destroying advertising as we know it from both ends. The current market is built on showing largely ineffective ads to mainly disinterested prospects – hoping there is an anomaly in there somewhere – and charging the advertiser to do so. I don’t know about you, but that doesn’t sound like a sustainable plan to me.

When I talk about selecting audiences in a market, it’s this level of specificity that I’m talking about. There is nothing in the above scenario that’s beyond the reach of current Mar-Tech. Perhaps it’s oversimplified. But I did that to make a point. In paid search, we used to have a saying, “buy your best clicks first”. It meant starting with the obviously relevant keywords – the people who were definitely looking for you. The problem was that there just wasn’t enough volume on these “sure-bet” keywords alone. But as digital has matured, the amount of “sure-bet” inventory has increased. We’re still not all the way there – where we can rely on sure-bet inventory alone – but we’re getting closer. The audience marketplace I’m envisioning gets us much of the way there. When technology and data allow us to assemble carefully segmented audiences with a high likelihood of successful engagement on the fly, we eliminate the inefficiencies in the market.

I truly believe that it’s time to discard the jury-rigged, heavily bandaged and limping behemoth that advertising has become and start thinking about this in an entirely new way. Papazian’s last sentence in his comment was…

“You just can’t get around the fact that many ads are going to be unwanted, no matter how they are targeted….”

Do we have to accept that as our future? It’s certainly the present, but I would hate to think we can’t reach any higher. The first step is to stop accepting advertising the way we know it as the status quo. We’ll be unable to imagine tomorrow if we’re still bound by the limitations of today.


The Rise of the Audience Marketplace

Far be it from me to let a theme go before it has been thoroughly beaten to the ground. This column has hosted a lot of speculation on the future of advertising and media buying and today, I’ll continue in that theme.

First, let’s return to a column I wrote almost a month ago about the future of advertising. This was a spin-off on a column penned by Gary Milner – The End of Advertising as We Know It. In it, Gary made a prediction: “I see the rise of a global media hub, like a stock exchange, which will become responsible for transacting all digital programmatic buys.”

Gary talked about the possible reversal of market fragmentation by channel and geographic area, due to the potential centralization of digital media purchasing. But I see it a little differently than Gary. I don’t see the creation of a media hub – or, at least, that wouldn’t be the end goal. Media would simply be the means to an end. I do see the creation of an audience market based on available data. Actually, even an audience would only be a means to an end. Ultimately, we’re buying one thing – attention. Then it’s our job to create engagement.

The Advertising Research Foundation has been struggling with measuring engagement for a long time now. But it’s because they were trying to measure engagement on a channel-by-channel basis and that’s just not how the world works anymore. Take search, for example. Search is highly effective at advertising, but it’s not engaging. It’s a connecting medium. It enables engagement, but it doesn’t deliver it.

We talk about multi-channel a lot, but we talk about it like it’s the holy grail. The grail in this case is an audience that’s likely to give us their attention and – once they do – likely to become engaged with our message. The multi-channel path to this audience is really inconsequential. We only talk about multi-channel now because we’re stopping short of the real goal: connecting with that audience. What advertising needs to do is give us accurate indicators of those two likelihoods: how likely is the audience to give us their attention, and what is their potential proclivity towards our offer? The future of advertising is in assembling audiences – no matter the channel – that are at a point where they’re interested in the message we have to deliver.

This is where the digitization of media becomes interesting. It’s not because media is aggregating into a single potential buying point – it’s because digitization allows us to follow a single prospect along a path of persuasion, gathering important feedback data along the way. In this definition, an audience isn’t a static snapshot in time. It becomes an evolving, iterative entity. We have always looked at advertising on an exposure-by-exposure basis. But if we start thinking about persuading an audience, that paradigm needs to shift. We have to think about having the right conversation, regardless of the channel that happens to be in use at the time.

Our concept of media carries a lot of baggage. In our minds, media is inextricably linked to channel. So when we think media, we’re really thinking channels. And, if we believe Marshall McLuhan, the medium dictates the message. But while media has undergone intense fragmentation, it has also become much more measurable and – thereby – more accountable. We know more than ever about who lies on the other side of a digital medium, thanks to an ever-increasing amount of shared data. That data is what will drive the advertising marketplace of the future. It’s not about media – it’s about audience.

In the market I envision, you would specify your audience requirements. The criteria used would not be so much our typical segmentations – demography or geography, for example. These have always been just proxies for what we really care about: the audience’s beliefs about our product and their predicted buying behaviors. I believe that, thanks to ever-increasing amounts of data, we’re going to make great strides in understanding the psychology of consumerism. Those insights will be foundational in the audience marketplace of the future. Predictive marketing will become more and more accurate and allow for increasingly precise targeting on a number of behavioral criteria.

Individual channels will become as irrelevant as the manufacturer that supplies the shock absorbers and tie rods in your new BMW. They will simply be grist for the mill in the audience marketplace. Mar-tech and ever smarter algorithms will do the channel selection and media buying in the background. All you’ll care about is the audience you’re targeting, the recommended creative (again, based on the mar-tech running in the background) and the resulting behaviors. Once your audience has been targeted and engaged, the predicted path of persuasion is continually updated and new channels are engaged as required. You won’t care what channels they are – you’ll simply monitor the progression of persuasion.


Media Buying is Just the Tip of Advertising’s Disruptive Iceberg

Two weeks ago, Gary Milner wrote a lucid prediction of what advertising might become. He rightly stated that advertising has been in a 40-year period of disruption. Bingo. He went on to say that he sees a consolidation of media buying into a centralized hub. Again, I don’t question the clarity of Milner’s crystal ball. It makes sense to me.

What is missing from Milner’s column, however, is the truly disruptive iceberg that threatens to sink advertising as we know it: the total disruption of the relationship between the advertiser and the marketplace. Milner deals primarily with the media-buying aspect of advertising, but there’s a much bigger question to tackle. He touched on it in one sentence: “The fact is that a vast majority of advertising is increasingly being ignored.”

Yes! Exactly. But why?

I’ll tell you why. It’s because of a disagreement about what advertising should be. We (the buyers) believe advertising’s sole purpose is to inform. But the sellers believe advertising is there to influence buyers. And increasingly, we’re rejecting that definition.

I know. That’s a tough pill to swallow. But let’s apply a little logic to the premise. Bear with me.

Advertising was built on a premise of scarcity. Market places can’t exist without scarcity. There needs to be an imbalance to make an exchange of value worthwhile. Advertising exists because there once was a scarcity of information. We (the buyers) lacked information about products and services. This was primarily because of the inefficiencies inherent in a physical market. So, in return for the information, we traded something of value – our attention. We allowed ourselves to be influenced. We tolerated advertising because we needed it. It was the primary way we gained information about the marketplace.

In Milner’s column, he talks about Peter Diamandis’ 6 stages that drive the destruction of industries: digitalization, deception, disruption, demonetization, dematerialization, and democratization. Milner applied it to the digitization of media. But these same forces are also being applied to information and rather than driving advertising from disruption to a renaissance period, as Milner predicts, I believe we’ve barely scratched the surface of disruption. The ride will only get bumpier from here on.

The digitization of information enables completely new types of marketplaces. Consider the emergence of the two-sided markets that both AirBNB and Uber exemplify. Thanks to the digitization of information, entirely new markets have emerged that allow the flow of information between buyers and suppliers. Because AirBNB and Uber have built their business models astride these flows, they can get a cut of the action.

But the premise of the model is important to understand. AirBNB and Uber are built on the twin platforms of information and enablement. There is no attempt to persuade by the providers of the platforms – because they know those attempts will erode the value of the market they’re enabling. We are not receptive to persuasion (in the form of advertising) because we have access to information that we believe to be more reliable – user reviews and ratings.

The basic premise of advertising has changed. Information is no longer scarce. In fact, through digitization, we have the opposite problem. We have too much information and too little attention to allocate to it. We now need to filter information and increasingly, the filters we apply are objectivity and reliability. That turns the historical value exchange of advertising on its head. This has allowed participatory information marketplaces such as Uber, AirBNB and Google to flourish. In these markets, where information flows freely, advertising that attempts to influence feels awkward, forced and disingenuous. Rather than building trust, advertising erodes it.

This disruption has also driven another trend with dire consequences for advertising as we know it – the “Maker” revolution and the atomization of industries. There are some industries where any of us could participate as producers and vendors. The hospitality industry is one of these. The needs of a traveller are pretty minimal – a bed, a roof, a bathroom. Most of us could provide these if we were so inclined. We don’t need to be Conrad Hilton. These are industries susceptible to atomization – breaking the market down to the individual unit. And it’s in these industries where disruptive information marketplaces will emerge first. But I can’t build a refrigerator. Or a car (yet). In these industries, scale is still required. And these will be the last strongholds of mass advertising.

Milner talked about the digitization of media and the impact on advertising. But there’s a bigger change afoot – the digitization of information in marketplaces that previously relied on scarcity of information to prop up business models. As information goes from scarcity to abundance, these business models will inevitably fall.

Where Context Comes From

Fellow Spinner Cory Treffiletti told you last week that data without context is noise.

Absolutely right.

I want to continue that conversation, because it's an important one. It's all about context – and, specifically, about how we decide what makes up that context.

You might have seen or heard the hubbub that emerged around a tweet from Neil deGrasse Tyson a month ago: "Earth needs a virtual country: #Rationalia, with a one-line Constitution: All policy shall be based on the weight of evidence"

Nice thought, but it ignited a social media shit-storm. Which was entirely predictable. Because we don’t want to be rational. We want to be human. Did 79 episodes of Star Trek teach us nothing?

The biggest beef against #Rationalia was that evidence is typically in the eyes of the beholder. It’s all a matter of context. I’m guessing that the policies that come from evidence in the hands of Republicans will not bear much resemblance to policies that come from the evidence of Democrats. The evidence could be the same but the context is different, because Democrats and Republicans think differently.

Like Treffiletti said – evidence without context is just noise. And our context is only marginally based on evidence. And that’s why #Rationalia – as intellectually attractive as it might be – won’t work.

We as humans understand the world through something called sensemaking. This is the process we use to build context. In 2006, psychologist Gary Klein shed new light on how we make sense of the world. We start with a frame that captures our current understanding of the situation and, depending on the evidence presented to us, we decide whether to elaborate that frame or discard it and create a new one. Sensemaking is really an iterative loop that constantly uses our current frame as a reference point.

But here’s the thing. What we consider as evidence depends on the frame we already have in place. It’s the filter that determines what data we pay attention to. And much as Neil Degrasse Tyson would like the governments of the world to be totally unbiased in the filtering of evidence, “that dog just won’t hunt.” It can’t – because we can’t consider data without some context to put it in.

Perhaps someday artificial intelligence will advance to the point where it can pull unbiased context out of random data. Maybe computers will be able to do what we’re unable to – make sense of the noise without assuming a pre-existing frame. But we’re not there yet. And even if we were, we would simply look at the conclusions of the computer and decide whether we agree with them or not. As long as humans are in charge, there will always be a biased filter in place.

So back to Cory’s column. If context is so important, think about where that context is coming from. Who is defining the context and what frame are they operating from? That in turn will define what data you consider and how you consider it.

Perhaps the most important decision before considering data is to be totally clear about what the goal is. Goals, together with experience, form the underpinning of beliefs. Frames are then built on those beliefs. Context comes from those frames. And context is the filter we apply to evidence.

Can Stories Make Us Better?

In writing this column, I often put ideas on the shelf for a while. Sometimes, world events conspire to make one of these shelved ideas suddenly relevant. This happened this past weekend.

The idea that caught my eye some months ago was an article that explored whether robots could learn morality by reading stories. On the face of it, it was mildly intriguing. But early Sunday morning as the heartbreaking news filtered to me from Orlando, a deeper connection emerged.

When we speak of unintended consequences, as we have before, the media amplification of acts of terror is one of them. The staggeringly sad fact is that shocking casualty numbers have their own media value. And that, said one analyst commenting on ways to deal with terrorism, is a new reality we have to come to terms with. When we in the media business make stories newsworthy, we assign worth not just to news consumers but also to newsmakers – those troubled individuals who have the motivation and the means to blow apart the daily news cycle.

This same analyst, when asked how we deal with terrorism, made the point that you can't prevent lone acts of terrorism. The only answer is to use the same network of cultural connections we use to amplify catastrophic events to create an environment that dampens rather than intensifies violent impulses. We in the media and advertising industries have to use our considerable skills in setting cultural contexts to create an environment that reduces the odds of a violent outcome. And sadly, this is a game of odds. There are no absolute answers here – there is just a statistical lowering of the curve. Sometimes, despite your best efforts, the unimaginable still happens.

But how do we use the tools at our disposal to amplify morality? Here, perhaps the story I shelved some months ago can provide some clues.

In the study from Georgia Tech, Mark Riedl and Brent Harrison used stories as models of acceptable morality. For most of human history, popular culture included at least an element of moral code. We encoded the values we held most dear into our stories. They provided a baseline for acceptable behavior, either through positive reinforcement of commonly understood virtues (prudence, justice, temperance, courage, faith, hope and charity) or warnings about universal vices (lust, gluttony, greed, sloth, wrath, envy and pride). Sometimes these stories had religious foundations, sometimes they were secular morality fables, but they all served the same purpose: they taught us what was acceptable behavior.

Stories were not originally intended to entertain. They were created to pass along knowledge and cultural wisdom. Entertainment came later, when we discovered that the more entertaining the story, the more effective it was at its primary purpose: education. And this is how the researchers used stories. Robots can't be entertained, but they can be educated.
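The core idea of learning behavior from stories can be illustrated with a toy sketch. This is inspired by, not reproduced from, the Georgia Tech work: the story events, the scoring rule and the pharmacy scenario below are my own simplified stand-ins.

```python
# Toy sketch of "morality from stories": reward action sequences whose
# event orderings match those observed in example stories.
from collections import Counter

# Two tiny example "stories" about how people normally buy medicine.
stories = [
    ["enter_pharmacy", "wait_in_line", "pay", "leave"],
    ["enter_pharmacy", "pay", "leave"],
]

# Count which event-to-event transitions the stories model as normal.
transitions = Counter()
for story in stories:
    for a, b in zip(story, story[1:]):
        transitions[(a, b)] += 1

def score(plan):
    """Higher score = the plan follows transitions the stories endorse."""
    return sum(transitions[(a, b)] for a, b in zip(plan, plan[1:]))

polite = ["enter_pharmacy", "wait_in_line", "pay", "leave"]
rude = ["enter_pharmacy", "leave"]  # grabs the goods and runs
# An agent maximizing this score prefers the socially acceptable plan.
```

The design point is the one the article makes: the agent never receives an explicit rule like "don't steal." The moral code is implicit in which sequences of events the stories repeatedly model.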

At some point in the last century, we focused on the entertainment value of stories over education and, in doing so, rotated our moral compass 180 degrees. If you look at what is most likely to titillate, sin almost always trumps sainthood. Review that list of virtues and vices and you'll see that the stories of our current popular culture focus on vice – that list could be the programming handbook for any Hollywood producer. I don't intend this as a sermon – I enjoy Game of Thrones as much as the next person. I simply state it as a fact. Our popular culture – and the amplification that comes from it – is focused almost exclusively on the worst aspects of human nature. If robots were receiving their behavioral instruction through these stories, they would be programmed to be psychopathic moral degenerates.

Most of us can absorb this continual stream of anti-social programming without being affected by it. We still know what is right and what is wrong. But in a world where it's the "black swan" outliers that grab the news headlines, we have to think about the consequences that reach beyond the mainstream. When we abandon the moral purpose of stories and focus on their entertainment aspect, are we also abandoning a commonly understood value landscape?

If you’re looking for absolute answers here, you won’t find them. That’s just not the world we live in. And am I naïve when I say the stories we chose to tell may have an influence on isolated violent events such as happened in Orlando? Perhaps. Despite all our best intentions, Omar Mateen might still have gone horribly offside.

But all things and all people are, to some extent, products of their environment. And because we in media and advertising are storytellers, we set that cultural environment. That's our job. Because of this, I believe we have a moral obligation. We have to start paying more attention to the stories we tell.


Ex Machina’s Script for Our Future

One of the more interesting movies I've watched in the past year has been Ex Machina. Unlike the abysmally disappointing Transcendence (how can you screw up Kurzweil, for God's sake?), Ex Machina is a tightly directed, frighteningly claustrophobic sci-fi thriller that peels back the moral layers of artificial intelligence one by one.

If you haven’t seen it, do so. But until you do, here’s the basic set up. Caleb Smith (Domhnall Gleeson) is a programmer at a huge Internet search company called Blue Book (think Google). He wins a contest where the prize is a week spent with the CEO, Nathan Bateman (Oscar Isaac) at his private retreat. Bateman’s character is best described as Larry Page meets Steve Jobs meets Larry Ellison meets Charlie Sheen – brilliant as hell but one messed up dude. It soon becomes apparent that the contest is a ruse and Smith is there to play the human in an elaborate Turing Test to determine if the robot Ava (Alicia Vikander) is capable of consciousness.

About halfway through the movie, Bateman confesses to Smith the source of Ava's intelligence "software." It came from Blue Book's own search data:

"It was the weird thing about search engines. They were like striking oil in a world that hadn't invented internal combustion. They gave too much raw material. No one knew what to do with it. My competitors were fixated on sucking it up, and trying to monetize via shopping and social media. They thought engines were a map of what people were thinking. But actually, they were a map of how people were thinking. Impulse, response. Fluid, imperfect. Patterned, chaotic."

To a search behaviour guy like me, that sounded more like fact than fiction. I've always thought search data could reveal much about how we think. That's why John Motavalli's recent column, Google Looks Into Your Brain And Figures You Out, caught my eye. Here, it seemed, fiction was indeed becoming fact. And the fact is, when we use one source for a significant chunk of our online lives, we give that source the ability to capture a representative view of our related thinking. Google and our searching behaviors, or Facebook and our social behaviors, both come immediately to mind.

Motavalli’s reference to Dan Ariely’s post about micro-moments is just one example of how Google can peak under the hood of our noggins and start to suss out what’s happening in there. What makes this either interesting or scary as hell, depending on your philosophic bent, is that Ariely’s area of study is not our logical, carefully processed thoughts but our subconscious, irrational behaviors. And when we’re talking artificial intelligence, it’s that murky underbelly of cognition that is the toughest nut to crack.

I think Ex Machina’s writer/director Alex Garland may have tapped something fundamental in the little bit of dialogue quoted above. If the data we willingly give up in return for online functionality provides a blue print for understanding human thought, that’s a big deal. A very big deal. Ariely’s blog post talks about how a better understanding of micro-moments can lead to better ad targeting. To me, that’s kind of like using your new Maserati to drive across the street and visit your neighbor – it seems a total waste of horsepower. I’m sure there are higher things we can aspire to than figuring out a better way to deliver a hotels.com ad. Both Google and Facebook are full of really smart people. I’m pretty sure someone there is capable of connecting the dots between true artificial intelligence and their own brand of world domination.

At the very least, they could probably whip up a really sexy robot.


Why Marketers Love Malcolm Gladwell … and Why They Shouldn’t

Marketers love Malcolm Gladwell. They love his pithy, reductionist approach to popular science – his tendency to sacrifice verity for the sake of a good “Just-so” story. And in doing this, what is Malcolm Gladwell but a marketer at heart? No wonder our industry is ga-ga over him. We love anyone who can oversimplify complexity down to the point where it can be appropriated as yet another marketing “angle”.

Take the entire influencer advertising business, for instance. Earlier this year, I saw an article saying more and more brands are expanding their influencer marketing programs. We are desperately searching for that holy nexus where social media and those super-connected “mavens” meet. While the idea of influencer marketing has been around for a while, it really gained steam with the release of Gladwell’s “The Tipping Point.” And that head of steam seems to have been building since the release of the book in 2000.

As others have pointed out, Gladwell has made a habit of taking one narrow perspective that promises to “play well” with the masses, supporting it with just enough science to make it seem plausible and then enshrining it as a “Law.”

Take “The Law of the Few”, for instance, from The Tipping Point: “The success of any kind of social epidemic is heavily dependent on the involvement of people with a particular and rare set of social gifts.” You could literally hear the millions of ears attached to marketing heads “perk up” when they heard this. “All we have to do,” the reasoning went, “is reach these people, plant a favorable opinion of our product and give them the tools to spread the word. Then we just sit back and wait for the inevitable epidemic to sweep us to new heights of profitability.”

Certainly commercial viral cascades do happen. They happen all the time. And, in hindsight, if you look long and hard enough, you'll probably find what appears to be a "maven" near ground zero. From this perspective, Gladwell's "Law of the Few" seems to hold water. But that's exactly the type of seductive reasoning that makes "Just So" stories so misleading. You mistakenly believe that because it happened once, you can predict when it's going to happen again. Gladwell's indiscriminate use of the term "Law" contributes to this common deception. A law is something that is universally applicable and constant. When a law governs something, it plays out the same way, every time. And this is certainly not the case in social epidemics.


If Malcolm Gladwell’s books have become marketing and pop-culture bibles, the same, sadly, cannot be said for Duncan Watts’ books. I’m guessing almost everyone reading this column has heard of Malcolm Gladwell. I further guess that almost none of you have heard of Duncan Watts. And that’s a shame. But it’s completely understandable.

Duncan Watts describes his work as determining the “role that network structure plays in determining or constraining system behavior, focusing on a few broad problem areas in social science such as information contagion, financial risk management, and organizational design.”

You started nodding off halfway through that sentence, didn’t you?

As Watts shows in his books, "Firms spent great effort trying to find 'connectors' and 'mavens' and to buy the influence of the biggest influencers, even though there was never causal evidence that this would work." But the work required to get to this point is not trivial. While he certainly aims at a broad audience, Watts does not read like Gladwell. His answers are not self-evident. There is no pithy bon mot that causes our neural tumblers to satisfyingly click into place. Watts' explanations are complex, counter-intuitive, occasionally ambiguous and often non-conclusive – just like the world around us. As he explains in his book "Everything Is Obvious: Once You Know the Answer," it's easy to look backwards and find causality. But it's not always right.
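Watts's argument about influencers can be illustrated with a toy threshold-cascade simulation of the kind he studies. This is a sketch under my own assumptions (network size, random-graph construction, threshold values), not his actual model or code:

```python
# Toy Watts-style threshold cascade: a node adopts an idea once the
# fraction of its neighbors who have already adopted reaches its threshold.
import random

def simulate_cascade(n=1000, avg_degree=6, threshold=0.18, seed_node=0, rng=None):
    rng = rng or random.Random(42)
    # Build a sparse random graph as a stand-in for a social network.
    neighbors = [set() for _ in range(n)]
    for _ in range(n * avg_degree // 2):
        a, b = rng.randrange(n), rng.randrange(n)
        if a != b:
            neighbors[a].add(b)
            neighbors[b].add(a)
    adopted = {seed_node}
    changed = True
    while changed:  # Spread until no new node adopts.
        changed = False
        for node in range(n):
            if node in adopted or not neighbors[node]:
                continue
            frac = sum(nb in adopted for nb in neighbors[node]) / len(neighbors[node])
            if frac >= threshold:
                adopted.add(node)
                changed = True
    return len(adopted)  # Final size of the cascade.
```

Running this with different thresholds makes Watts's point concrete: whether a single seed triggers a global cascade depends far more on how easily influenced the surrounding population is than on which node you seed. With highly persuadable nodes nearly everyone adopts; with resistant nodes, almost no one does, no matter how "connected" the seed is.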

Marketers love simplicity. We love laws. We love predictability. That’s why we love Gladwell. But in following this path of least resistance, we’re straying further and further from the real world.