Why Reality is in Deep Trouble

If 2017 was the year of Fake News, 2018 could well be the year of Fake Reality.

You Can’t Believe Your Eyes

I just saw Star Wars: The Last Jedi. When Carrie Fisher came on screen, I had to ask myself: is this really her, or is it CGI? I couldn’t remember whether she had been able to finish all her scenes before her tragic passing last year. When I had a chance to check, I found that it was actually her. But the very fact that I had to ask the question is telling. After all, Rogue One resurrected Peter Cushing via CGI, and he passed away in 1994.

CGI is not quite to the point where you can’t tell the difference between reality and computer generation, but it’s only a hair’s breadth away. It’s definitely to the point where you can no longer trust your eyes. And that has some interesting implications.

You Can Now Put Words in Anyone’s Mouth

Rogue One’s visual effects supervisor, John Knoll, had to fend off some pointed questions about the ethics of bringing a dead actor back to life. He defended the move by saying, “We didn’t do anything Peter Cushing would have objected to.” Whether you agree or not, the bigger question here is that they could have. They could have made the Cushing digital doppelganger do anything – and say anything – they wanted.

But It’s Not Just Hollywood That Can Warp Reality

If fake reality comes out of Hollywood, we are prepared to cut it some slack. A long and slippery ethical slope defines the entertainment landscape. In Rogue One’s case, the issue wasn’t using CGI, or even using CGI to represent a human – that describes a huge slice of today’s entertainment. It was using CGI to resurrect a dead actor and literally put words in his mouth. That seemed to cross some ethical line in our perception of what’s real. But at the end of the day, this questionable warping of reality was still embedded in a fictional context.

But what if we could put words in the manufactured mouth of a sitting US president? That’s exactly what a team at the University of Washington did with Barack Obama, in work reminiscent of Stanford’s Face2Face demo. They used a neural network to create, in essence, a lip-sync video of Obama: the computer manipulated images of his face so that his mouth matched audio sampled from another speech.

Being academics, they kept everything squeaky clean on the ethical front. All the words were Obama’s – it’s just that they were said at two different times. But someone less scrupulous could synthesize Obama’s voice – or anyone’s – and sync it to video that would be indistinguishable from reality.
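To give a feel for the mechanics, here is a deliberately toy sketch of the core pipeline: derive a per-frame feature from the audio and use it to drive a mouth shape. The real system learned the audio-to-mouth mapping with a neural network; this stand-in simply maps loudness to lip opening, and every number in it is made up.

    # Toy sketch of audio-driven lip sync: map per-frame audio energy to a
    # mouth-opening value. Real systems learn this mapping with a neural
    # network; this loudness proxy only shows the shape of the pipeline.
    import numpy as np

    SAMPLE_RATE = 16000   # audio samples per second
    FPS = 25              # video frames per second

    def mouth_openings(audio):
        """Return one mouth-opening value (0..1) per video frame."""
        samples_per_frame = SAMPLE_RATE // FPS
        n_frames = len(audio) // samples_per_frame
        frames = audio[:n_frames * samples_per_frame].reshape(n_frames, -1)
        rms = np.sqrt((frames ** 2).mean(axis=1))   # loudness per frame
        return rms / (rms.max() + 1e-9)             # normalize to 0..1

    # Fake one second of "speech": a 120 Hz tone with a varying envelope.
    t = np.linspace(0, 1, SAMPLE_RATE, endpoint=False)
    audio = np.sin(2 * np.pi * 120 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 3 * t))

    print(mouth_openings(audio).round(2))  # feed these to a face renderer

Swap the loudness proxy for a trained network and the renderer for photoreal face synthesis, and you have the unsettling result described above.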

Why We Usually Believe Our Eyes

When it comes to a transmitted representation of reality, we accept video as the gold standard. Our brains believe what we see to be real. Of all our five senses, we trust sight the most to interpret what is real and what is fake. Photos used to be accepted as incontrovertible proof of reality, until Photoshop messed that up. Now, it’s video’s turn. Technology has handed us the tools that enable us to manufacture any reality we wish and distribute it in the form of video. And because it’s in that form, most everyone will believe it to be true.

Reality, Inc.

The concept of a universally understood and verifiable reality is important. It creates some type of provable common ground. We have always had our own ways of interpreting reality, but at the end of the day, there was typically someone – and some way – to empirically determine what was real, if we just bothered to look for it.

But we now run the risk of accepting manufactured reality as “good enough” for our purposes. In the past few years, we’ve discovered just how dangerous filtered reality can be. Whether we like it or not, Facebook, Google, YouTube and other mega-platforms are now responsible for how most of us interpret our world. These are for-profit organizations that really have no ethical obligation to attempt to provide a reasonable facsimile of reality. They have already outstripped the restraints of legislation and any type of ethical oversight. Now, these same platforms can be used to distribute media that are specifically designed to falsify reality. Of course, I should also mention that in return for access to all this, we give up a startling amount of information about ourselves. And that, according to UBC professor Taylor Owen, is deeply troubling:

“It means thinking very differently about the bargain that platforms are offering us. For a decade the deal has been that users get free services, and platforms get virtually unlimited collection of data about all aspects of our life and the ability to shape the information we consume. The answer isn’t to disengage, as these tools are embedded in our society, but instead to think critically about this bargain.

“For example, is it worth having Facebook on your mobile phone in exchange for the immense tracking data about your digital and offline behaviour? Or is the free children’s content available on YouTube worth the data profile that is being built about your toddler, the horrific content that gets algorithmically placed into your child’s feed, and the ways in which A.I. are creating content for them and shaping what they view? Is the Amazon smart speaker in your living room worth providing Amazon access to everything you say in your home? For me, the answer is a resounding ‘no’.”

2018 could be an interesting year…

Which Me Am I – And on Which Network?

I got an email from Strava. If you’re not familiar with it, Strava is a social network for cyclists and runners. As the former, I joined Strava about two years ago.

Here is the email I received:

Your Friends Are on Strava

 Add friends to follow their adventures and get inspired by their workouts

 J. Doe, Somewhere, CA

 “Follow”

 (Note: The personal information has been changed because, after preaching about privacy for the last two weeks, I do have to practice what I preach.)

Here’s the thing: I’m not friends with Mr. Doe. I’ve met him a few times when we crossed paths on the speaking circuit. To be brutally honest, J. Doe was a connection I thought would help me grow my business. He was a higher-profile speaker than I was. He’d written a book that sold way more copies than mine ever did. I was “friending up” in my networking.

The last time we met – several years ago now – I quickly extended a Facebook friend invite. At the time, I – and the rest of the world – was using Facebook as a catch-all bucket for all my social connections: friends, family and the people I was unabashedly stalking in order to make more money. And J. Doe accepted my invite. It gave my ego a nice little boost.

So, according to Facebook, we’re friends. But we’re not – not really. And that became clear when I got the Strava invite. It would have been really weird if I had connected with him on Strava, following his adventures and being inspired by his workouts. We just don’t have that type of relationship. There was no social basis for me to make that connection.

I have different social spheres in my life. I have the remnants of my past professional life as an online marketer. I have my passion as a cyclist. I have a new emerging sphere as a fledgling tourism operator. I have my family.

I could go on. I can think of only a handful of people who comfortably lie within two or more of my spheres.

But with social sign-ins (which I used for Strava), those spheres are suddenly mashed together. It’s becoming clear that, socially, we are complex creatures with many, many sides.

Facebook would love nothing more than to be the sole supporting platform of our entire social grid. But that works at cross purposes with how humans socialize. Our social lives are not monolithic, one-size-fits-all things, but a sprawling landscape cluttered with very distinctive nodes that are haphazardly linked together.

The only common denominator is ourselves, in the middle of that mess. And even we can have surprising variability. The me that loves cycling is a very different guy from the me that wanted to grow my business profile.

This modality is fueling an expansion of specialized, socially connected destinations.

Strava is a good example of this. Ostensibly, it provides a way to track my rides. But it also aspires to be the leading community of athletes. And that’s where it runs headlong into the problem of social modality.

Social sign-ins seem to be a win-win-win. For the user, it eases the headache of maintaining an ever-expanding list of user names and passwords. Sure, there’s that momentary lurch in the pit of our stomachs when we get that warning that we’re sharing our entire lives with the proprietors of the new site, but that goes away with just one little click.

For the website owner, every new social sign-in user comes complete with rich new data and access to all their contacts. And for Facebook, every sign-in is a chance to sink its talons into us just a little deeper, gathering data from yet one more online outpost.
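To make that trade concrete, here is a minimal sketch of what a social sign-in request can look like. The endpoint and scope names follow Facebook’s OAuth login dialog as I understand it; the client ID and redirect URL are hypothetical placeholders.

    # Minimal sketch of a "Log in with Facebook" authorization request.
    # The scope parameter is where the trade happens: "user_friends" hands
    # the new site your social graph along with your identity.
    from urllib.parse import urlencode

    FB_OAUTH_DIALOG = "https://www.facebook.com/v2.11/dialog/oauth"

    def build_signin_url(client_id, redirect_uri):
        params = {
            "client_id": client_id,        # the third-party app (e.g., Strava)
            "redirect_uri": redirect_uri,  # where the auth code gets sent
            "response_type": "code",
            "scope": "public_profile,email,user_friends",  # identity + contacts
        }
        return FB_OAUTH_DIALOG + "?" + urlencode(params)

    # Hypothetical values, for illustration only.
    print(build_signin_url("123456789", "https://example-app.com/auth/callback"))

One click on the consent screen this URL produces, and that momentary lurch in the pit of the stomach is over.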

But like many things that seem beneficial, unintended consequences are part of the package. This is especially true when the third party I’m signing up for is building its own community.

Is the “me” that wants to become part of this new community the “me” that Facebook thinks I am? Will things get weird when these two social spheres are mashed together?

Because Facebook assumes that I am always me and you are always you, whatever the context, some of us are forced to splinter our online social personas by maintaining multiple profiles. We may have a work profile and a social one.

The person Facebook thinks we are may be significantly different from the person LinkedIn thinks we are. Keeping our social selves separate becomes a juggling act of ever-increasing proportions.

So why does Facebook want me to always be me? It’s because of us – and by us, I mean marketers. We love the idea of markets that are universal and targeting that is omniscient. It just makes our lives so much easier. Our lives as marketers, I mean.

As people? Well, that’s another story — but right now, I’m a marketer.

See the problem?

What Price Privacy?

As promised, I’m picking up the thread from last week’s column on why we seem okay with trading privacy for convenience. The simple – and most plausible – answer is that we’re really not being given a choice.

As MediaPost Senior Editor Joe Mandese pointed out in a very on-point comment, what is being created is a transactional marketplace where offers of value are exchanged for information:

“Like any marketplace, you have to have your information represented in it to participate. If you’re not “listed” you cannot receive bids (offers of value) based on who you are.”

Amazon is perhaps the most relevant example of this. Take Alexa and Amazon Web Services (AWS). Alexa promises to “make your life easier and more fun.” But this comes at a price. Because Alexa is voice-activated, it’s always listening. That means the privacy of anything we say in our homes has been ceded to Amazon through its terms of service. The same is true for Google Assistant and Apple’s Siri.
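A toy model of that arrangement, with simulated utterances standing in for raw audio (a real device processes sound, not text, and spots the wake word locally): the microphone necessarily hears everything, and the terms of service govern what happens to the part that gets uploaded.

    # Toy model of an always-listening assistant: the mic hears everything;
    # in principle, only what follows the wake word is sent to the cloud.
    # Inputs are simulated strings, not audio, for illustration only.
    WAKE_WORD = "alexa"

    def process_stream(utterances):
        uploaded = []
        for heard in utterances:        # every utterance passes through the device
            if heard.lower().startswith(WAKE_WORD):
                uploaded.append(heard)  # only this part goes upstream
        return uploaded

    living_room = [
        "what should we have for dinner",
        "alexa play some jazz",
        "did you pay the phone bill",
    ]
    print(process_stream(living_room))  # ['alexa play some jazz']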

But Amazon is pushing the privacy envelope even further as it tests its new in-home delivery service – Amazon Key. In exchange for the convenience of having your parcels delivered inside your home when you’re away, you literally give Amazon the keys to your home. Your front door gets a smart lock that can be opened via the remote servers of AWS. Opt in to this, and suddenly you’ve given Amazon the right not only to listen to everything you say in your home but also to enter your home whenever it wishes.

How do you feel about that?

This becomes the key question: how do we feel about the convenience/privacy exchange? It turns out our response depends in large part on how that question is framed. In a 2015 study by the Annenberg School for Communication at the University of Pennsylvania, researchers gathered responses from participants, probing their sensitivity to the trading of privacy for convenience. Here is a sampling of the results:

  • 55% of respondents disagreed with the statement: “It’s OK if a store where I shop uses information it has about me to create a picture of me that improves the services they provide for me.”
  • 71% disagreed with: “It’s fair for an online or physical store to monitor what I’m doing online when I’m there, in exchange for letting me use the store’s wireless internet, or Wi-Fi, without charge.”
  • 91% disagreed that: “If companies give me a discount, it is a fair exchange for them to collect information about me without my knowing.”

Here, along the spectrum of privacy pushback, we start to see what the real problem is. We’re willing to exchange private information, as long as we’re aware of all that is happening and feel in control of it. But that, of course, is unrealistic. We can’t control it. And even if we could, we’d soon learn that the overhead required to do so is unmanageable. It’s why Vint Cerf said we’re going to have to learn to live with transparency.

Again, as Mr. Mandese points out, we’re really not being given a choice. Participating in the modern economy requires anteing up personal information. If we choose to remain totally private, we cut ourselves off from a huge portion of what’s available. And we are already at the point where the vast majority of us really can’t opt out. We all get pissed off when we hear of a security breach à la the recent Equifax debacle. Our privacy sensitivities are heightened for a day or two and we give lip service to outrage. But unless we go full-out Old Order Amish, what are our choices?

We may rationalize the trade-off by saying the private information we’re exchanging for services is not really that sensitive. But that’s where the potential threat of Big Data comes in. Gather enough seemingly innocent data, and soon you can start predicting with startling accuracy the aspects of our lives that we are sensitive about. We run headlong into the Target pregnant-teen dilemma: Target’s purchase-history models famously flagged a teenage shopper as pregnant before her own father knew. And that particular dilemma becomes thornier as the walls break down between data silos and your personal information becomes a commodity on an open market.
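As a toy illustration of how innocent data becomes a sensitive prediction, consider the sketch below. The features are hypothetical stand-ins for purchase signals and the data is entirely synthetic; the point is only that a simple classifier can infer an attribute nobody disclosed.

    # Toy illustration: inferring a sensitive attribute from "innocent" data.
    # Features are hypothetical purchase signals; all data is synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 1000
    # Weekly purchase counts for three innocuous product types.
    X = rng.poisson(lam=[1.0, 0.5, 0.8], size=(n, 3)).astype(float)
    # Synthetic ground truth: the attribute quietly correlates with purchases.
    logits = 1.2 * X[:, 0] + 0.9 * X[:, 1] + 0.6 * X[:, 2] - 3.0
    y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

    model = LogisticRegression().fit(X, y)
    shopper = np.array([[3.0, 2.0, 2.0]])  # one shopper's innocuous basket
    print(round(model.predict_proba(shopper)[0, 1], 2))

Nothing in the basket is sensitive on its own; the prediction built from it is.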

The risk of trading away our privacy also escalates over time – it’s the frog-in-boiling-water syndrome. It starts innocently but can soon develop into a scenario that would keep almost anyone up at night in a paranoid cold sweat. Let’s say the data is used for targeting – singling us out of the crowd for the purpose of selling stuff to us. Or – in the case of governments – seeing if we have a proclivity for terrorism. Perhaps that isn’t so scary if Big Brother is benevolent and looking out for our best interests. But what if Big Brother becomes a bully?

There is another important aspect to consider here, one that may have dire unintended consequences. When our personal data is used to make the world more convenient for us, that requires “filtering” the world through some type of algorithm that removes whatever it determines to be irrelevant or uninteresting to us. Essentially, the entire physical world is “targeted” to us. And this can go horribly wrong, as we saw in the last presidential election. Increasingly, we live in a filtered “bubble” shaped by things beyond our control. Our views get trapped in an echo chamber and our perspective narrows.
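Mechanically, the bubble is nothing exotic. Here is a minimal sketch with made-up interest scores and stories: a relevance filter keeps whatever resembles what you already engaged with, and everything else simply never reaches you.

    # Toy relevance filter: the mechanics behind a filter bubble.
    # Interest scores and stories are made up for illustration.
    interests = {"cycling": 0.9, "marketing": 0.7, "politics": 0.1}

    stories = [
        ("New carbon road bike released", "cycling"),
        ("Election coverage: the opposing view", "politics"),
        ("Ad targeting trends for 2018", "marketing"),
        ("Local council debates bike lanes", "cycling"),
    ]

    def filter_feed(stories, interests, threshold=0.5):
        """Keep only stories whose topic scores above the threshold."""
        return [title for title, topic in stories
                if interests.get(topic, 0.0) >= threshold]

    print(filter_feed(stories, interests))  # the opposing view never appears

Every click then nudges the scores further toward what is already in the feed, which is how the bubble seals itself.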

But perhaps the biggest red flag is that in signing away our privacy by clicking “accept,” we often also sign away any potential protection for when things do go wrong. In another study, called “The Biggest Lie on the Internet,” researchers found that when students were presented with a fictitious terms of service and privacy policy, 74% skipped reading it altogether. Those who did read didn’t take much time about it – just 73 seconds on average. What almost no one caught were “gotcha” clauses about sharing data with the NSA and giving up your first-born child. While those were fictitious, real terms of service and privacy notices often include clauses granting total control over the information gathered about you and waiving your right to sue if anything goes bad. And even if you could sue, there might not be anyone left to sue: one analyst calculated that even if all the people who had their financial information stolen from Equifax won a settlement, it would amount to only about $81 each.


Why We’re Trading Privacy for Convenience

In today’s world, increasingly quantified and tracked by the Internet of Things, we are talking a lot about privacy. When we stop to think about it, we are vociferously for privacy. But then we immediately turn around and click another “accept” box on a terms and conditions form that barters our personal privacy away, in increasingly large chunks. What we say and what we do are two very different things.

What is the deal with humans and privacy, anyway? Why do we say it is important to us, and why do we keep giving it away? Are we looking at the inevitable death of our concept of privacy?

Are We Hardwired for Privacy?

It does seem that – all things being equal – we favor privacy. But why?

There is an evolutionary argument for having some “me-time.” Privacy has an evolutionary advantage both when you’re most vulnerable to physical danger (on the toilet) and when you’re most vulnerable to mating rivalry (having sex). If you can keep these things private, you’ll both live longer and have more offspring. So it’s not unusual for humans to be hardwired to desire a certain amount of privacy.

But our modern understanding of privacy actually conflates a number of concepts: there is protective privacy, there is the need for solitude, and finally there is moral and ethical privacy. Each of these has different behavioral origins, but when we talk about our “right to privacy,” we don’t distinguish between them. This can muddy the waters when we dig deep into our relationship with our privacy.

Blame England…

Let’s start with the last of these – our moral privacy. This is actually a pretty modern concept. Until 150 years ago, we as a species did pretty much everything communally. Our modern concept of privacy had its roots in the Industrial Revolution and Victorian England. There, the widespread availability of the patent lock and the introduction of the “private” room quickly led to a class-stratified quest for privacy. This was coupled with the moral rectitude of the time. Kate Kershner from howstuffworks.com explains:

“In the Victorian era, the ‘personal’ became taboo; the gilded presentation of yourself and family was critical to social standing. Women were responsible for outward piety and purity, men had to exert control over inner desires and urges, and everyone was responsible for keeping up appearances.”

In Victorian England, privacy became a proxy for social status. Only the highest levels of the social elite could afford privacy. True, there was some degree of personal protection here that probably had evolutionary behavioral underpinnings, but it was all tied up in the broader evolutionary concept of social status. The higher your class, the more you could hide away the all-too-human aspects of your private life and thoughts. In this sense, privacy was not a right but a status token that might be traded off for another token of equal or higher value. I suspect this is why we may say one thing but do another when it comes to our own privacy. There are other ways we determine status now.

Privacy vs. Convenience

In a previous column, I wrote about how being busy is the new status symbol. We are defining social status differently, and I think how we view privacy might be caught between how we used to recognize status and how we do it today. In 2013, Google’s Vint Cerf said that privacy may be a historical anomaly. Social libertarians and legislators were quick to condemn Cerf’s comment, but it’s hard to argue with his logic. In Cerf’s words, transparency “is something we’re gonna have to live through.”

Privacy might still be a hot-button topic for legislators, but it’s probably dying – not because of some nefarious plot against us, but because we’re quickly trading it away. Busy is the new rich, and convenience (or our illusion of convenience) allows us to do more things. Privacy may just be a tally token in our quest for social status, and increasingly we may be willing to trade it for more relevant tokens. As Greg Ferenstein, author of the Ferenstein Wire, said in an exhaustive (and visually bountiful) post on the birth and death of privacy:

“Humans invariably choose money, prestige or convenience when it has conflicted with a desire for solitude.”

If we take this view, then it’s not so much how we lose our privacy that becomes important, but who we’re losing it to. We seem all too willing to give up our personal data as long as two prerequisites are met: 1) we get something in return; and 2) we have at least a little trust that the holder of our data won’t use it for evil purposes.

I know those two points raise the hackles of many amongst you, but that’s where I’ll have to leave it for now. I welcome you to have the next-to-last word (because I’ll definitely be revisiting this topic). Is privacy going off the rails and, if so, why?