Why We’re Trading Privacy for Convenience

In today’s world, increasingly quantified and tracked by the Internet of Things, we are talking a lot about privacy. When we stop to think about it, we are vociferously for privacy. But then we immediately turn around and click another “accept” box on a terms and conditions form that barters our personal privacy away, in increasingly large chunks. What we say and what we do are two very different things.

What is the deal with humans and privacy anyway? Why do we say it is important to us, and why do we keep giving it away? Are we looking at the inevitable death of our concept of privacy?

Are We Hardwired for Privacy?

It does seem that – all things being equal – we favor privacy. But why?

There is an evolutionary argument for having some “me-time”. Privacy has an evolutionary advantage both when you’re most vulnerable to physical danger (on the toilet) and when you’re exposed to mating rivalry (having sex). If you can keep these things private, you’ll both live longer and have more offspring. So it’s not unusual for humans to be hardwired to desire a certain amount of privacy.

But our modern understanding of privacy actually conflates a number of concepts. There is protective privacy, there is the need for solitude, and finally there is our moral and ethical privacy. Each of these has different behavioral origins, but when we talk about our “right to privacy” we don’t distinguish between them. This can muddy the waters when we dig deep into our relationship with our privacy.

Blame England…

Let’s start with the last of these – our moral privacy. This is actually a pretty modern concept. Until 150 years ago, we as a species did pretty much everything communally. Our modern concept of privacy had its roots in the Industrial Revolution and Victorian England. There, the widespread availability of the patent lock and the introduction of the “private” room quickly led to a class-stratified quest for privacy. This was coupled with the moral rectitude of the time. Kate Kershner from howstuffworks.com explains:

“In the Victorian era, the ‘personal’ became taboo; the gilded presentation of yourself and family was critical to social standing. Women were responsible for outward piety and purity, men had to exert control over inner desires and urges, and everyone was responsible for keeping up appearances.”

In Victorian England, privacy became a proxy for social status. Only the highest levels of the social elite could afford privacy. True, there was some degree of personal protection here that probably had evolutionary behavioral underpinnings, but it was all tied up in the broader evolutionary concept of social status. The higher your class, the more you could hide away the all-too-human aspects of your private life and thoughts. In this sense, privacy was not a right, but a status token that may be traded off for another token of equal or higher value. I suspect this is why we may say one thing but do another when it comes to our own privacy. There are other ways we determine status now.

Privacy vs Convenience

In a previous column, I wrote about how being busy is the new status symbol. We are defining social status differently and I think how we view privacy might be caught between how we used to recognize status and how we do it today. In 2013, Google’s Vint Cerf said that privacy may be a historical anomaly. Social libertarians and legislators were quick to condemn Cerf’s comment, but it’s hard to argue his logic. In Cerf’s words, transparency “is something we’re gonna have to live through.”

Privacy might still be a hot button topic for legislators but it’s probably dying not because of some nefarious plot against us but rather because we’re quickly trading it away. Busy is the new rich and convenience (or our illusion of convenience) allows us to do more things. Privacy may just be a tally token in our quest for social status and increasingly, we may be willing to trade it for more relevant tokens.  As Greg Ferenstein, author of the Ferenstein Wire, said in an exhaustive (and visually bountiful) post on the birth and death of privacy,

“Humans invariably choose money, prestige or convenience when it has conflicted with a desire for solitude.”

If we take this view, then it’s not so much how we lose our privacy that becomes important but who we’re losing it to. We seem all too willing to give up our personal data as long as two prerequisites are met: 1) We get something in return; and, 2) We have a little bit of trust in the holder of our data that they won’t use it for evil purposes.

I know those two points raise the hackles of many amongst you, but that’s where I’ll have to leave it for now. I welcome you to have the next-to-last word (because I’ll definitely be revisiting this topic). Is privacy going off the rails and, if so, why?

Will We Ever Let Robots Shop for Us?

Several years ago, my family and I visited Astoria, Oregon. You’ll find it at the mouth of the Columbia River, where it empties into the Pacific. We happened to take a tour of Astoria and our guide pointed out a warehouse. He told us it was filled with canned salmon, waiting to be labeled and shipped. I asked what brand they were. His answer was “All of them. They all come from the same warehouse. The only thing different is the label.”

Ahh… the power of branding…

Labels can make a huge difference. If you need proof, look no further than the experimental introduction of generic brands in grocery stores. Well, they were generic to begin with, anyway. But over time, the generic “yellow label” was replaced with a plethora of store brands. The quality of what’s inside the box hasn’t changed much, but the packaging has. We do love our brands.

But there’s often no rational reason to do so. Take the aforementioned canned salmon, for example. Same fish, no matter what label you may stick on it. Brands are a trick our brain plays on us. We may swear our favorite brand tastes better than its competitors, but it’s usually just our brain short-circuiting our senses and our sensibility. Neuroscientist Read Montague found this out when he redid the classic Pepsi taste test using an fMRI scanner. The result? When Coke drinkers didn’t know what they were drinking, the majority preferred Pepsi. But the minute the brand was revealed, they again swore allegiance to Coke. The taste hadn’t changed, but their brains had. As soon as the brain was aware of the brand, some parts of it suddenly started lighting up like a pinball machine.

In previous research we did, we found that the brain instantly responded to favored brands the same way it did to a picture of a friend or a smiling face. Our brains have an instantaneous and subconscious response to brands. And because of that, our brains shouldn’t be trusted with buying decisions. We’d be better off letting a robot do it for us.

And I’m not saying that facetiously.

A recent post on Bloomberg.com looked forward 20 years and predicted how automation would gradually take over every step of the consumer product supply chain, from manufacturing to shipping to delivery to our door. The post predicts that the factory floor, the warehouse, ocean liners, trucks and delivery drones will all be powered by artificial intelligence and robotic labor. The first set of human hands that might touch a product would be those of the buyer. But maybe we’re automating the wrong side of the consumer transaction. The thing human hands shouldn’t be touching is the buy button. We suck at it.

We have taken some steps in the right direction. Itamar Simonson and Emanuel Rosen predicted the death of branding in their book Absolute Value:

“In the past the marketing function “protected” the organization in some cases. When things like positioning, branding, or persuasion worked effectively, a mediocre company with a good marketing arm (and deep pockets for advertising) could get by. Now, as consumers are becoming less influenced by quality proxies, and as more consumers base their decisions on their likely experience with a product, this is changing.”

But our brand love dies hard. If our brain can literally rewire the evidence from our own senses – how can we possibly make rational buying decisions? True, as Simonson and Rosen point out, we do tend to favor objective information when it’s available, but at the end of the day, our buying decisions still rely on an instrument that has proven itself unreliable in making optimal decisions under the influence of brand messaging.

If we’re prepared to let robots steer ships, drive trucks and run factories, why won’t we let them shop for us? Existing shopping bots stop well short of actually making the purchase. We’ll put our lives in the hands of A.I. in a myriad of ways, but we won’t hand our credit card over. Why is that?

It seems ironic to me. If there were any area where machines could beat humans, it would be in making purchases. They’re much better at filtering based on objective criteria, they can stay on top of all prices everywhere, and they can instantly aggregate data from all similar types of purchases. Most importantly, machines can’t be tricked by branding or marketing. They can complete the Absolute Value loop Simonson and Rosen talk about in their book.
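To make that concrete, here’s a back-of-the-napkin sketch of what brand-blind purchase logic might look like. The products, fields and scoring weights are all hypothetical; the point is simply that the brand field never enters the decision.

```python
# A hypothetical, brand-blind purchase rule: score only measurable attributes.
products = [
    {"brand": "Brand A",     "price": 1.50, "rating": 4.2, "unit_size_ml": 355},
    {"brand": "Brand B",     "price": 1.30, "rating": 4.1, "unit_size_ml": 355},
    {"brand": "Store brand", "price": 0.90, "rating": 4.0, "unit_size_ml": 355},
]

def objective_score(p):
    # Trade off quality (rating) against cost per millilitre.
    # The brand name is deliberately never consulted.
    price_per_ml = p["price"] / p["unit_size_ml"]
    return p["rating"] - 1000 * price_per_ml

best = max(products, key=objective_score)
print("Buy:", best["brand"])  # chosen on price and rating alone
```

A real shopping bot would weigh far more than two attributes, but none of them would be the logo on the label.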

Of course, there’s just one little problem with all that. It essentially ends the entire marketing and advertising industry.

Ooops.

We Don’t Need More Athletes and Models – We Do Need More People Who Understand Complexity

Have you seen the Verizon ad?

 

The one that starts with LeBron James walking towards the camera. He tells us, “We don’t need more LeBrons.” He’s followed in quick succession by other celebrities, including model Adriana Lima, quarterback Drew Brees and soccer star David Villa, all saying we don’t need more of their kind. The ad wraps up by saying what we do need is more people in science and technology to fill the 4 million jobs available. Verizon is pitching in by supporting education in STEM subjects (Science, Technology, Engineering and Math). The world, apparently, needs a lot more engineers.

Fair enough. The world runs on science and technology. But there’s an unintended consequence that comes with that. Technology is making the world a more complex place. And what we really need is more people who understand what complexity means.

By complexity, I don’t mean complicated. Those are two different things. I mean complexity in its classic sense – coming from the Latin “com” – meaning “together” – and “plex” – meaning “woven”. “Woven together” is a pretty good starting point for understanding complexity. It’s a concept that depends on connection, and we are more connected than ever before. Whether we like it or not, with connection comes complexity. And when we’re talking about complexity, we’re talking about a whole new ball game where all traditional bets are off.

There’s another funny thing about complexity. It’s nothing new. The world has always been complex. Biology has long been the domain of complex adaptive systems. This is true of all of the physical sciences. Benoit Mandelbrot found fractal complexity in leaves and the coastline of Britain. Quantum physics has always been around. It wasn’t invented at the beginning of the last century by Max Planck, Albert Einstein and Niels Bohr. It just took us most of our history as a species to discover it, hiding there beneath the deceptively simple rules of Isaac Newton. Complexity has always been part of nature. We’ve just been ignoring it for a long, long time, believing with all our hearts in a simpler, more comprehensible world.

Humans hate complexity, because complexity brings with it unpredictability and an inherent lack of control. It leads naturally into chaos. We much prefer models with foreseeable outcomes. We have been trying for many years to predict the weather, with very limited success. Why? Because weather is complex and often chaotic. And it’s getting more so, not less.

But the extreme weather we’re seeing more and more of is analogous to many parts of our world. Complexity is rearing its head in more and more places. It lies beneath everything. In the words of the Santa Fe Institute, the self-proclaimed world headquarters for complexity science — “(they) endeavor to understand and unify the underlying, shared patterns in complex physical, biological, social, cultural, technological, and even possible astrobiological worlds.”

Which means complexity is everywhere. It impacts everything. And almost none of us understand it. But we’ve got to figure this stuff out, because the stakes are huge.

Let’s take something as important to us as democracy, for instance.

There is nothing especially complex about the idea of democracy. But the model of democracy is a different beast, because it relies on the foundation of our society, which is incredibly complex. Democracy is dependent on unwritten rules, which are in turn dependent on conventions and controls that have been inherent in our society. These are what have been called the “soft guardrails of democracy”. And they are being eroded by our newly connected complexity. A few weeks ago, some of America’s top political scientists got together at Yale University to talk about democracy and almost all of them agreed – democracy is in deep trouble. Yascha Mounk, from Harvard, summed up their collective thoughts succinctly: “If current trends continue for another 20 or 30 years, democracy will be toast.”

So complexity is something we should be learning about. But where to start? And when? Currently, if people do study complexity science, it’s generally at the post-grad level. And that’s just a handful of people, at a few universities. We need to start understanding complexity and its implications much sooner. It should be covered in grade school. But there’s no one to teach it, because the majority of teachers have no idea what I’m talking about. In a recent dissertation, a researcher from the University of Pennsylvania asked science teachers in a number of schools in Singapore if they were familiar with complexity. The findings were disheartening: “a large sample of ninety Grades 11 and 12 science teachers in six randomly-selected schools across Singapore revealed as many as 80% of the teachers reported that they did not have prior knowledge or heard of complex systems.” By the way, Singapore is consistently rated best in the world for science education. Here in North America, we trail by a significant margin. If this is a problem there, it’s a bigger problem here.

If you’re old enough to remember the movie The Graduate, there was a scene where “the graduate” – played by Dustin Hoffman – was wandering around his parents’ cocktail party when he was cornered by a family friend, Mr. McGuire. McGuire offered a word of career advice. Literally – one word:

“I just want to say one word to you – just one word. Are you listening? Plastics.”

That was 50 years ago. Today, my word is “complexity.”

Are you listening?

157 Shades of Grey…

Design is important. Thinking through how people will respond to the aesthetics of your product is an admirable thing. I remember once having the pleasure of sharing a stage with JetBlue’s VP of Marketing – Amy Curtis-McIntyre. She was explaining how important good design was to the airline’s overall marketing strategy. A tremendous amount of thought went into the aesthetics of all their printed materials – even those cards explaining the safety features of the airplane that none of us ever read. But on JetBlue, not only did passengers read them – they stole them because they were so cleverly designed. Was this a problem for management? Not according to Amy:

“You know you’re doing something right when people steal your marketing shit”

So, I’m a fan of good design. But according to a recent story on Fastcodesign.com, Google is going at least 156 shades too far. They seem obsessed with color – or – at least, testing for colors. The design team for Google’s new home assistant – the Mini – had to pick three different colors for the home appliance. They wanted one to make a personal statement and apparently that statement is best made by the color “Coral.” Then they needed a color that would sit unobtrusively next to your TV set and that turned out to be “Charcoal.” Finally, they needed a “floater” color that could go anywhere in the house, including the kitchen. And that’s when the design team at Google may have gone off the tracks. They tested 157 shades of grey – yes – 157 – before they settled on “Chalk,” which is said to be the most inoffensive shade imaginable. They even worked with a textile firm to create their own custom cloth for the grill on top.

That beats Google’s previous obsessive-compulsive testing disorder record, set by then VP of Search Marissa Mayer when she ordered the design team to test 42 different shades of blue for search links to see which got the most clicks. At Google, good design seems to equal endless testing. But is there anything wrong with that?

Well, for one thing, you can test yourself into a rabbit hole, running endless tests and drowning in reams of data looking for the optimal solution – completely missing global maxima while myopically focused on the local. Google tests everything – and I mean everything – truly, madly and deeply. Even Google insiders admit this penchant for testing often gets them focused on the trees rather than the forest. This is particularly true for design. Google has a long history of obsessively turning out ho-hum designs.
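As a minimal illustration of that local-versus-global trap – with made-up numbers, and no claim about Google’s actual testing pipeline – imagine ten shades, each with a measured click-through rate, and a greedy test that only ever switches to a neighboring shade when it performs better.

```python
# Hypothetical click-through rates for ten shades, indexed 0..9.
# The global best is shade 8, but there is a smaller local peak at shade 2.
ctr = [0.020, 0.024, 0.027, 0.025, 0.022, 0.023, 0.028, 0.033, 0.036, 0.031]

def hill_climb(start):
    """Greedy testing: keep switching to a neighboring shade while it tests better."""
    i = start
    while True:
        neighbors = [j for j in (i - 1, i + 1) if 0 <= j < len(ctr)]
        best_next = max(neighbors, key=lambda j: ctr[j])
        if ctr[best_next] <= ctr[i]:
            return i  # no neighbor beats the current shade, so testing stops here
        i = best_next

print(hill_climb(0))  # 2 -- stuck on the local peak near where we started
print(hill_climb(5))  # 8 -- happens to reach the true best shade
print(max(range(len(ctr)), key=lambda i: ctr[i]))  # 8 -- the global maximum
```

Which peak the process finds depends entirely on where it starts – exactly the kind of myopia the test data alone won’t warn you about.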

Personally, when it comes to pure design magic, I much prefer the Apple approach. Led by Steve Jobs and Jony Ive’s unerring sense for the aesthetic, it’s hard to think of a longer run of spectacular product designs. Yes, they too sweated the small stuff. But those details were always in service of a higher vision – an empathetic, elegantly simple, friendly approach to product design that somehow magically connected with the user, leaving that user somewhat awed and consistently impressed. One might quibble with the technology that lies inside the package, but no one has put together a more beautiful package than the Apple design team at the height of their powers.

When you look at a Google product, you have the result of endless testing and data crunching. When you look at a classic Apple design, you sense that this came from more than simple testing. This came from intuition and creativity.

 

I, Robot….

Note: No Artificial Intelligence was involved in the creation of this column.

In 1942, science fiction writer Isaac Asimov introduced the Three Laws of Robotics, later anchoring his collection of short stories, I, Robot.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Asimov had the laws coming from the Handbook of Robotics, 56th Edition, 2058 A.D. What was once an unimaginably distant time in the future is now knocking with increasing intensity on the door of the present. And Elon Musk, for one, is worried: “AI is a fundamental risk to the existence of human civilization.” Musk believes, Laws of Robotics or no, we won’t be able to control this genie once it gets out of its bottle.

Right now, the genie looks pretty benign. In the past year, the Washington Post has used robot reporters to write over 850 stories. The Post believes this is a win/win with their human reporters, because the robot, named Heliograf, can:

  • Cover stories that wouldn’t have been covered due to lack of human resources
  • Do the factual heavy lifting for human reporters
  • Alert humans to possible news stories in big data sets

So, should we fear or cheer robots? I think the Post’s experiment highlights two areas that AI excels at, and indicates how we might play nice with machines.

For AI to work effectively, the dots have to be pretty well sketched out. When they are, AI can be tireless in scouting out relevant facts and data where humans would tend to get bored easily. But humans are still much better at connecting those dots, especially when no obvious connection is apparent. We do it through something called intuition. It’s at least one area where we can still blow machines away.

Machines are also good at detecting patterns in overwhelming amounts of data. Humans tend to overfit… to make the data fit our narratives. We’ll come back to this point in a minute, but for now, let’s go back to intuition. It’s still the trump card we humans hold. In 2008, Wired editor Chris Anderson prematurely (and, many believe, incorrectly) declared the Scientific Method dead, thanks to the massive data sets we now have available:

“We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.”

Anderson gets it partly right, but he also unfairly gives intuition short shrift. This is not a zero sum game. Intuition and A.I. can and should play nicely together. As I mentioned a few weeks ago, human intuition was found to boost the effectiveness of an optimization algorithm by 25%.

Evolutionary biologist Richard Dawkins recently came to the defense of intuition in Science, saying:

“Science proceeds by intuitive leaps of the imagination – building an idea of what might be true, and then testing it”

The very human problem comes when we let our imaginations run away from the facts, bending science to fit our hypotheses:

“It is important that scientists should not be so wedded to that intuition that they omit the very important testing stage.”

There is a kind of reciprocation here – an oscillation between phases. Humans are great at some stages – the ones that require intuition and imagination – and machines are better at others – where a cold and dispassionate analysis of the facts is required. Like most things in nature that pulse with a natural rhythm, the whole gains from the opposing forces at work here. It is a symphony with a beat and a counterbeat.

That’s why, for the immediate future anyway, machines should bend not to our will, but to our imagination.

The Assisted Reality of the New Marketer

Last week, MediaPost’s Laurie Sullivan warned us that the future of analytical number crunchers is not particularly rosy in the world of marketing. With cognitive technologies like IBM’s Watson coming on strong in more and more places, analytic skills are not that hot a commodity any more. Ironically, when it comes to marketing, the majority of companies have no plans to incorporate cognitive technologies in the near future. According to a report from IBM and Oxford Economics, only 24% of organizations have a plan to incorporate these technologies in their own operations.

Another study, from Forrester, explored AI marketing readiness in the retail and e-commerce sectors. There, the state of readiness is a little better. In these typically forward-thinking sectors, 72% are implementing AI marketing tech in the next year, but only 45% of those companies consider themselves to be excelling in at least two of the three dimensions of readiness.

If those numbers seem contradictory, it helps to understand the difference between cognitive technology and artificial intelligence. You’ll notice that IBM refers to Watson as “cognitive computing.” As Rob High, IBM’s CTO for Watson, put it, “What it’s really about is involvement of a human in the loop,” and he described Watson as “augmented intelligence” rather than artificial intelligence.

That “human in the loop” is a critical difference between the two technologies. Whether we like it or not, machines are inevitable in the world of marketing, so we’d better start thinking about how to play nice with them.

 

I remember first seeing a video from the IBM Amplify summit at a MediaPost event last year. Although the presentation was a little stilted, the promise was intriguing. It showed a marketer musing about a potential campaign and throwing “what ifs” at Watson, which responded almost instantly with analyzed, quantified answers. The premise of the video was to show how smart Watson was. But here’s a “what if” to consider. What if the real key to this was the hypotheticals that the human seemed to be pulling out of the blue? That doesn’t seem that impressive to us – certainly not as impressive as Watson’s corralling and crunching of relevant numbers in the blink of an eye. Musing is what we do. But this is just one example of something called Moravec’s Paradox.

Moravec’s Paradox, as stated by AI pioneer Marvin Minsky, is this: “In general, we’re least aware of what our minds do best. We’re more aware of simple processes that don’t work well than of complex ones that work flawlessly.” In other words, what we find difficult are the tasks that machines are well suited for, and the things we’re not even aware of are the things machines find notoriously hard to do. Things like intuition. And empathy. If we’re looking at the future of the human marketer, we’re probably looking at those two things.

In his book Humans Are Underrated, Geoff Colvin writes:

“Rather than ask what computers can’t do, it’s much more useful to ask what people are compelled to do—those things that a million years of evolution cause us to value and seek from other humans, maybe for a good reason, maybe for no reason, but it’s the way we are.”

We should be ensuring that both humans and machines are doing what they do best, essentially erasing Moravec’s Paradox. Humans focus on intuition and empathy and machines do the heavy lifting on the analyzing and number crunching. The optimal balance – at this point anyway – is a little bit of both.

In Descartes’ Error, neurologist Antonio Damasio showed that without human intuition and emotion – together with the corresponding physical cues he called somatic markers – we could rationalize ourselves into a never-ending spiral without ever coming to a conclusion. We need to be human to function effectively.

Researchers at MIT have even tried to build this into an algorithm. In 1954, Herbert Simon introduced a concept called bounded rationality. It may seem like this puts limits on the cognitive power of humans, but as programmers like to say, bounded rationality is a feature, not a bug. The researchers at MIT found that in an optimization challenge, such as finding the optimal routing strategy for an airline, humans have the advantage of being able to impose some intuitive limits on the number of options considered. For example, a human can say, “Planes should visit each city at most once,” and thereby dramatically limit the number crunching required. When these intuitive strategies were converted to machine language and introduced into automated algorithms, those algorithms got 10 to 15% smarter.
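Here’s a minimal sketch of that idea – not the MIT researchers’ actual system, just an illustration with made-up cities and distances of how a single human rule of thumb (“visit each city at most once”) collapses the search space before any number crunching begins.

```python
from itertools import permutations, product

CITIES = ["A", "B", "C", "D"]
DIST = {  # made-up symmetric distances between city pairs
    ("A", "B"): 3, ("A", "C"): 7, ("A", "D"): 2,
    ("B", "C"): 4, ("B", "D"): 6, ("C", "D"): 5,
}

def leg(a, b):
    return DIST[(a, b)] if (a, b) in DIST else DIST[(b, a)]

def cost(route):
    return sum(leg(a, b) for a, b in zip(route, route[1:]))

# Without the human rule: any sequence of four stops, repeats allowed.
unconstrained = list(product(CITIES, repeat=4))   # 4**4 = 256 candidates

# With the human rule "visit each city at most once": only true tours remain.
constrained = list(permutations(CITIES))          # 4! = 24 candidates

best = min(constrained, key=cost)                 # only the pruned set gets scored
print(f"{len(unconstrained)} candidates without the rule, {len(constrained)} with it")
print("best route:", " -> ".join(best), "| cost:", cost(best))
```

With only four cities the savings look trivial; the point is that the pruning grows explosively with the size of the problem, and it comes from intuition, not computation.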

When it comes right down to it, the essence of marketing is simply a conversation between two people. All the rest: the targeting, the automation, the segmentation, the media strategy – this is all just to add “mass” to marketing. And that’s all the stuff that machines are great at. For us humans, our future seems to rely on our past – and on our ability to connect with other humans.

Disruption in the Rear View Mirror

Oh… it’s so easy to be blasé. I always scan the MediaPost headlines each week to see if there’s anything to spin. I almost skipped right past a news post by Larissa Faw – “Zenith: Google Remains Top-Ranked Media Company By Ad Revenue.”

“Of course Google is the top ranked media company,” I yawned as I was just about to click on the next email in my inbox. Then it hit me. To quote Michael Bublé, “Holy Shitballs, Mom!”

Maybe that headline doesn’t seem extraordinary in the context of today, but I’ve been doing this stuff for almost 20 years now, and in that context – well, it’s huge! I remembered a column I wrote ages ago speculating that Google had barely scratched its potential. After a little digging, I found it. It was from October 2006, so just over a decade ago. Google had just passed the $6 billion mark in annual revenue. Ironically, that seemed a bigger deal than their current revenue of almost $80 billion seems today. In that column, I pushed to the extreme and speculated that Google could someday pass $200 billion in revenue. While we’re still only 1/3 of the way there, the claim doesn’t seem nearly as ludicrous as it did back then.

But here’s the line that really made me realize how far we’ve come in the ten and a half years since I wrote that column: “Google and Facebook together accounted for 20% of global advertising expenditure across all media in 2016, up from 11% in 2012. They were also responsible for 64% of all the growth in global ad spend between 2012 and 2016.”

Two companies that didn’t exist 20 years ago now account for 20% of all global advertising expenditure. And the speed with which they’re gobbling up advertising budgets is accelerating. If you’re a dilettante student of disruption, as I am, those are pretty amazing numbers. In the day-to-day of MediaPost – and digital marketing in general – we tend to accept all this as normal. It’s like we’re surfing on top of a wave without realizing the wave is 300 freakin’ feet high. Sometimes you need to zoom out a little to realize how momentous the everyday is. And if you look at this on a scale of decades rather than days, you start to get a sense that the speed of change is massive.

To me, the most interesting thing about this is that both Google and Facebook have introduced a fundamentally new relationship between advertising and its audience. Google’s innovation is – of course – intent-based advertising. Facebook’s is based on socially mediated network effects. Both of these things required the overlay of digital connection. That – as they say – has made all the difference. And that is where the real disruption can be found. Our world has become a fundamentally different place.

Much as we remain focused on the world of advertising and marketing here in our little corner of the digital world, it behooves us to remember that advertising is simply a somewhat distorted reflection of the behaviors of the world in general. If things are being disrupted here, it is because things are being disrupted everywhere. For us beings of flesh, bone and blood, that disruption has three distinct beachheads: the complicated relationship between our brains and the digital tools we have at our disposal, the way we connect with each other, and a dismantling of the restrictions of the physical world at the same time as we build the scaffolding of a human-designed digital world. Any one of these has the potential to change our species forever. With all three bearing down on us, permanent change is a lead-pipe cinch.

Thirty years is a nanosecond in terms of human history. Even on the scale of my lifetime, it seems like yesterday. Reagan was president. We were terrorized by the Unabomber. News outlets were covering the Iran-Contra affair. U2 released The Joshua Tree. Platoon won the Best Picture Oscar. And if you wanted to advertise to a lot of people, you did so on a major TV network with the help of a Madison Avenue agency. Thirty years ago, nothing of what I’m talking about existed. Nothing. No Google. No Facebook. No Internet – at least, not in a form any of us could appreciate.

As much as advertising has changed in the past 30 years, it has only done so because we – and the world we inhabit – have changed even more. And if that thought is a little scary, just think what the next 30 years might bring.