Drawing a Line in the Sand for Net Privacy

Ever heard of Strava? The likelihood that you would say yes jumped astronomically on January 27, 2018. That was the day of the Strava security breach. Before that, you had probably never heard of it, unless you happened to be a cyclist or runner.

I’ve talked about Strava before. Then, I was talking about social modality and trying to keep our various selves straight on various social networks. Today, I’m talking about privacy.

Through GPS-enabled devices, like a fitness tracker or smartphone, Strava enables you to track your workouts, including the routes you take. Once a year, it aggregates all these activities and publishes them as a global heatmap. Over 1 billion workouts are mapped in every corner of the earth. If you zoom in enough, you’ll see my favorite cycling routes in the city I live in. The same is true for everyone who uses the app. Unless – of course – you’ve opted out of the public display of your workouts.
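
As an aside, the aggregation itself is technically mundane: a heatmap like this is essentially millions of GPS points binned into grid cells and counted. Here is a minimal sketch of that general idea in Python – the sample points and grid size are invented for illustration, and this is not Strava’s actual pipeline:

    from collections import Counter

    # Hypothetical GPS track points (latitude, longitude) from many workouts.
    # In a real heatmap these would number in the billions.
    gps_points = [
        (53.5461, -113.4938), (53.5462, -113.4941), (53.5470, -113.4950),
        (53.5461, -113.4937), (53.5459, -113.4939),
    ]

    CELL_SIZE = 0.001  # grid resolution in degrees; smaller = finer map

    def to_cell(lat, lon, cell_size=CELL_SIZE):
        """Snap a GPS point to the integer index of the grid cell containing it."""
        return (round(lat / cell_size), round(lon / cell_size))

    # Count how many points land in each cell; the counts become the "heat"
    # that gets rendered as colour intensity on the map.
    heat = Counter(to_cell(lat, lon) for lat, lon in gps_points)

    for cell, count in heat.most_common():
        print(f"cell {cell}: {count} points")

The interesting part isn’t the counting – it’s that the counting happens by default, for everyone who hasn’t found the switch to turn it off.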

And therein lies the problem. Actually – two problems.

Problem number one: there is really no reason I shouldn’t share my workouts. The worst you could find out is that I’m a creature of habit when working out. But if I’m a Marine stationed at a secret military base in Afghanistan and I start my morning by jogging around the perimeter of the base – well – now we have a problem. I’ve just inadvertently highlighted my base on the map for the world to see. And that’s exactly what happened. When the heatmap went live, a university student in Australia noticed a number of hotspots in the middle of nowhere in Afghanistan and Syria.

On to problem number two. In terms of numbers affected, the Strava breach is a drop in the bucket compared to Yahoo – or Equifax – or Target – or any of the other breaches that have made the news. But this breach was different in a very important way. The victims here weren’t individual consumers. This time, national security was threatened. And that moved it beyond the “consumer beware” defense that typically gets invoked.

This charts new territory for privacy. The difference in perspective in this breach has heightened sensitivities and moved the conversation in a new direction. Typically, the response when there is a breach is:

  1. You should have known better
  2. You should have taken steps to protect your information; or,
  3. Hmmm, it sucks to be you

Somehow, this response has held up in the previous breaches despite the fact that we all know that it’s almost impossible to navigate the minefield of settings and preferences that lies between you and foolproof privacy. As long as the victims were individuals it was easy to shift blame. This time, however, the victim was the collective “we” and the topic was the hot button of all hot buttons – national security.

Now, one could and should argue that all of these might apply to the unfortunate soldier who decided to take his Fitbit on his run, but I don’t think it will end there. I think the current “opt out” approach to net privacy might have to be reconsidered. The fact is, all these platforms would prefer to gather as much of your data as possible and have the right to use it as they see fit. It opens up a number of monetization opportunities for them. Typically, the quid pro quo offered back to you – the user – is more functionality and the ability to share with your own social circle. The current ecosystem’s default starting point is to enable as much sharing and functionality as possible. Humans being human, we will usually go with the easiest option – the default – and only worry about it if something goes wrong.

But as users, we do have the right to push back. We have to realize that opening the full data pipe gives the platforms much more value than we ever receive in return. We’re selling off our own personal data for the modern-day equivalent of beads and trinkets. And the traditional corporate response – “you can always opt out if you want” – simply takes advantage of our own human limitations. The current fallback is that the platforms are introducing more transparency into their approaches to privacy, making them easier to understand. While this is a step in the right direction, a more ethical approach would be “opt in,” where the default is the maximum protection of our privacy and we have to make a conscious effort to lower that wall.
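
Strip away the policy language and the whole opt-in versus opt-out debate comes down to a default value. A hypothetical sketch – the setting names are invented and don’t correspond to any platform’s real configuration:

    from dataclasses import dataclass, replace

    @dataclass
    class PrivacySettings:
        share_activity_publicly: bool
        include_in_aggregate_data: bool
        allow_ad_targeting: bool

    # Today's "opt out" world: everything is on unless the user hunts
    # through the settings and turns it off.
    OPT_OUT_DEFAULTS = PrivacySettings(
        share_activity_publicly=True,
        include_in_aggregate_data=True,
        allow_ad_targeting=True,
    )

    # An "opt in" world: everything is off until the user makes a
    # conscious effort to lower the wall, one setting at a time.
    OPT_IN_DEFAULTS = PrivacySettings(
        share_activity_publicly=False,
        include_in_aggregate_data=False,
        allow_ad_targeting=False,
    )

    def new_account(defaults: PrivacySettings) -> PrivacySettings:
        """Every new user starts wherever the platform chooses to point the defaults."""
        return replace(defaults)

    print("opt-out start:", new_account(OPT_OUT_DEFAULTS))
    print("opt-in start: ", new_account(OPT_IN_DEFAULTS))

Same settings screen, same checkboxes – the only thing that changes is where every new account starts, and that default does most of the work.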

We’ll see. Opting in puts ethics and profitability on a collision course. For that reason, I can’t ever see the platforms going in that direction unless we insist.

 

 

Thinking Beyond the Brand

Apparently boring is the new gold standard of branding, at least when it comes to ranking countries on the international stage. According to a new report from US News, the Wharton School and Y&R’s BAV Group, Canada is the No. 2 country in the world. That’s right – Canada – the country that Robin Williams called “a really nice apartment over a meth lab.”

The methodology here is interesting. It was basically a brand benchmarking study. That’s what BAV does. They’re the “world’s largest and leading empirical study of brands.” And Canada’s brand is: safe, slightly left-leaning, polite, predictable and – yes – boring. Oh – and we have lakes and mountains.

Who, you may ask, beat us? Switzerland – a country that is safe, slightly left-leaning, polite, predictable and – yes – boring. Oh – and they have lakes and mountains too.

This study has managed to reduce entire countries to the type of cognitive shorthand we call a brand. As a Canadian, I can tell you this country contains multitudes – some good, some bad – and remarkably little of it is boring. We’re like an iceberg (literally, in some months) – there’s a lot that lies under the surface. But as far as the world is concerned, you already know everything you need to know about Canada, and no further learning is required.

That’s the problem with branding. We rely more and more on whatever brand perceptions we already have in place, without thinking too much about whether they’re based on valid knowledge. We certainly don’t go out of our way to challenge those perceptions. What was originally intended to sell dish soap is being used as a cognitive shortcut for everything we do. We rely on branding – instant know-ability – or what I called labelability in a previous column. We spend more and more of our time knowing and less and less of it learning.

Branding is a mental rot that is reducing everything to a broadly sketched caricature.

Take politics for example. That same BAV group turned their branding spotlight on candidates for the next presidential election. Y&R CEO David Sable explored just how important branding will be in 2020. Spoiler alert: it will be huge.

When BAV looked at the brands of various candidates, Trump continues to dominate. This was true in 2016, and depending on the variables of fate currently in play, it could be true in 2020 as well. “We showed how fresh and powerful President Trump was as a brand, and just how tired and weak Hillary was… despite having more esteem and stature.”

Sable prefaced his exploration with this warning: “What follows is not a political screed, endorsement or advocacy of any sort. It is more a questioning of ourselves, with some data thrown to add to the interrogative.” In other words, he’s saying that this is not really based on any type of rational foundation; it’s simply evaluating what people believe. And I find that particular mental decoupling to be troubling.

This idea of cognitive shorthand is increasingly prevalent in an attention-deficit world. Everything is being reduced to a brand. The problem is that once a brand has been “branded,” it’s very difficult to shake. Our world is being boiled down to branding and target marketing. Our brains have effectively become pigeonholed. That’s why Trump was right when he said, “I could stand in the middle of Fifth Avenue and shoot somebody and I wouldn’t lose any voters.”

We have a dangerous spiral developing. In a world with an escalating amount of information, we increasingly rely on brands/beliefs for our rationalization of the world. When we do expose ourselves to information, we rely on information that reinforces those brands and beliefs. Barack Obama identified this in a recent interview with David Letterman: “One of the biggest challenges we have to our democracy is the degree to which we don’t share a common baseline of facts. We are operating in completely different information universes. If you watch Fox News, you are living on a different planet than you are if you listen to NPR.”

Our information sources have to be “on-brand”. And those sources are filtered by algorithms shaped by our current beliefs. As our bubble solidifies, there is nary a crack left for a fresh perspective to sneak in.

 

Why Reality is in Deep Trouble

If 2017 was the year of Fake News, 2018 could well be the year of Fake Reality.

You Can’t Believe Your Eyes

I just saw Star Wars: The Last Jedi. When Carrie Fisher came on screen, I had to ask myself: is this really her, or is it CGI? I couldn’t remember whether she had the chance to do all her scenes before her tragic passing in December 2016. When I had a chance to check, I found that it was actually her. But the very fact that I had to ask the question is telling. After all, Rogue One did resurrect Peter Cushing via CGI, and he passed away back in 1994.

CGI is not quite to the point where you can’t tell the difference between reality and computer generation, but it’s only a hair’s breadth away. It’s definitely to the point where you can no longer trust your eyes. And that has some interesting implications.

You Can Now Put Words in Anyone’s Mouth

The Rogue One visual effects head, John Knoll, had to fend off some pointed questions about the ethics of bringing a dead actor back to life. He defended the move by saying, “We didn’t do anything Peter Cushing would have objected to.” Whether you agree or not, the bigger question here is that they could have. They could have made the Cushing digital doppelganger do anything – and say anything – they wanted.

But It’s Not just Hollywood That Can Warp Reality

If fake reality comes out of Hollywood, we are prepared to cut it some slack. There is a long and slippery ethical slope that defines the entertainment landscape. In Rogue One’s case, the issue wasn’t using CGI, or even using CGI to represent a human – that covers a huge slice of today’s entertainment. It was using CGI to resurrect a dead actor and literally put words in his mouth. That seemed to cross some ethical line in our perception of what’s real. But at the end of the day, this questionable warping of reality was still embedded in a fictional context.

But what if we could put words in the manufactured mouth of a sitting US president? That’s exactly what a team at the University of Washington did with Barack Obama, in work reminiscent of Stanford’s Face2Face facial-reenactment technology. They used a neural network to essentially create a lip-sync video of Obama, with the computer manipulating images of his face to match a sample of audio taken from another speech.

Being academics, they kept everything squeaky clean on the ethical front. All the words were Obama’s – it’s just that they were said at two different times. But those less scrupulous could easily synthesize Obama’s voice – or anyone’s – and sync it to video of them talking that would be indistinguishable from reality.

Why We Usually Believe Our Eyes

When it comes to a transmitted representation of reality, we accept video as the gold standard. Our brains believe what we see to be real. Of all our five senses, we trust sight the most to interpret what is real and what is fake. Photos used to be accepted as incontrovertible proof of reality, until Photoshop messed that up. Now, it’s video’s turn. Technology has handed us the tools that enable us to manufacture any reality we wish and distribute it in the form of video. And because it’s in that form, most everyone will believe it to be true.

Reality, Inc.

The concept of a universally understood and verifiable reality is important. It creates some type of provable common ground. We have always had our own ways of interpreting reality, but at the end of the day, there was typically someone, and some way, to empirically determine what was real, if we just bothered to look for it.

But we now run the risk of accepting manufactured reality as “good enough” for our purposes. In the past few years, we’ve discovered just how dangerous filtered reality can be. Whether we like it or not, Facebook, Google, YouTube and other mega-platforms are now responsible for how most of us interpret our world. These are for-profit organizations that really have no ethical obligation to attempt to provide a reasonable facsimile of reality. They have already outstripped the restraints of legislation and any type of ethical oversight. Now, these same platforms can be used to distribute media that are specifically designed to falsify reality. Of course, I should also mention that in return for access to all this, we give up a startling amount of information about ourselves. And that, according to UBC professor Taylor Owen, is deeply troubling:

“It means thinking very differently about the bargain that platforms are offering us. For a decade the deal has been that users get free services, and platforms get virtually unlimited collection of data about all aspects of our life and the ability to shape the information we consume. The answer isn’t to disengage, as these tools are embedded in our society, but instead to think critically about this bargain.

“For example, is it worth having Facebook on your mobile phone in exchange for the immense tracking data about your digital and offline behaviour? Or is the free children’s content available on YouTube worth the data profile that is being built about your toddler, the horrific content that gets algorithmically placed into your child’s feed, and the ways in which A.I. are creating content for them and shaping what they view? Is the Amazon smart speaker in your living room worth providing Amazon access to everything you say in your home? For me, the answer is a resounding ‘no’.”

2018 could be an interesting year…

Why We’re Trading Privacy for Convenience

In today’s world, increasingly quantified and tracked by the Internet of Things, we are talking a lot about privacy. When we stop to think about it, we are vociferously for privacy. But then we immediately turn around and click another “accept” box on a terms and conditions form that barters our personal privacy away, in increasingly large chunks. What we say and what we do are two very different things.

What is the deal with humans and privacy anyway? Why do we say it is important to us, and why do we keep giving it away? Are we looking at the inevitable death of our concept of privacy?

Are We Hardwired for Privacy?

It does seem that – all things being equal – we favor privacy. But why?

There is an evolutionary argument for having some “me-time.” Privacy has an evolutionary advantage both when you’re most vulnerable to physical danger (on the toilet) and when you’re vulnerable to mating rivalry (having sex). If you can keep these things private, you’ll both live longer and have more offspring. So it’s not surprising that humans are hardwired to desire a certain amount of privacy.

But our modern understanding of privacy actually conflates a number of concepts. There is protective privacy, the need for solitude and finally there’s our moral and ethical privacy. Each of these has different behavioral origins, but when we talk about our “right to privacy” we don’t distinguish between them. This can muddy the waters when we dig deep into our relationship with our privacy.

Blame England…

Let’s start with the last of these – our moral privacy. This is actually a pretty modern concept. Until 150 years ago, we as a species did pretty much everything communally. Our modern concept of privacy had its roots in the Industrial Revolution and Victorian England. There, the widespread availability of the patent lock and the introduction of the “private” room quickly led to a class-stratified quest for privacy. This was coupled with the moral rectitude of the time. Kate Kershner from howstuffworks.com explains:

“In the Victorian era, the ‘personal’ became taboo; the gilded presentation of yourself and family was critical to social standing. Women were responsible for outward piety and purity, men had to exert control over inner desires and urges, and everyone was responsible for keeping up appearances.”

In Victorian England, privacy became a proxy for social status. Only the highest levels of the social elite could afford privacy. True, there was some degree of personal protection here that probably had evolutionary behavioral underpinnings, but it was all tied up in the broader evolutionary concept of social status. The higher your class, the more you could hide away the all-too-human aspects of your private life and thoughts. In this sense, privacy was not a right, but a status token that may be traded off for another token of equal or higher value. I suspect this is why we may say one thing but do another when it comes to our own privacy. There are other ways we determine status now.

Privacy vs Convenience

In a previous column, I wrote about how being busy is the new status symbol. We are defining social status differently, and I think how we view privacy might be caught between how we used to recognize status and how we do it today. In 2013, Google’s Vint Cerf said that privacy may be a historical anomaly. Social libertarians and legislators were quick to condemn Cerf’s comment, but it’s hard to argue with his logic. In Cerf’s words, transparency “is something we’re gonna have to live through.”

Privacy might still be a hot-button topic for legislators, but it’s probably dying – not because of some nefarious plot against us, but because we’re quickly trading it away. Busy is the new rich, and convenience (or our illusion of convenience) allows us to do more things. Privacy may just be a tally token in our quest for social status and, increasingly, we may be willing to trade it for more relevant tokens. As Greg Ferenstein, author of the Ferenstein Wire, said in an exhaustive (and visually bountiful) post on the birth and death of privacy:

“Humans invariably choose money, prestige or convenience when it has conflicted with a desire for solitude.”

If we take this view, then it’s not so much how we lose our privacy that becomes important but who we’re losing it to. We seem all too willing to give up our personal data as long as two prerequisites are met: 1) We get something in return; and, 2) We have a little bit of trust in the holder of our data that they won’t use it for evil purposes.

I know those two points raise the hackles of many amongst you, but that’s where I’ll have to leave it for now. I welcome you to have the next-to-last word (because I’ll definitely be revisiting this topic). Is privacy going off the rails and, if so, why?

The Retrofitting of Broadcasting

I returned to my broadcast school for a visit last week. Yes, it was nostalgic, but it was also kind of weird.

Here’s why…

I went to broadcast school in the early ’80s. The program I attended, at the Northern Alberta Institute of Technology, had just built brand-new studios, outfitted with the latest equipment. We were the first group of students to get our hands on the stuff. Some of the local TV stations even borrowed our studio to do their own productions. SCTV – with the great John Candy, Catherine O’Hara, Eugene Levy, Rick Moranis and Andrea Martin – was produced just down the road at ITV. It was a heady time to be in TV. I don’t want to brag, but yeah, we were kind of a big deal on campus.

That was then. This was now. I went back for my first visit in 35 years, and nothing had really changed physically. The studios, the radio production suites, the equipment racks, the master control switcher – it was all still there – in all its bulky, behemoth-like glory. They hadn’t even changed the lockers. My old one was still down from Equipment Stores and right across from one of the classrooms.

The disruption of the past four decades was instantly crystallized. None of the students today touched any of that ’80s-era technology – well – except for the locker. That was still functional. The rows and rows of switches, rotary pots, faders and other doodads hadn’t been used in years. The main switching board served as a makeshift desk for a few computer monitors and a keyboard. The radio production suites were used to store old office chairs. The main studio, where we once taped interviews, music videos, multi-camera dramas, sketch comedies and even a staged bar fight? Yep, more storage.

The campus news show was still shot in the corner, but the rest of that once state-of-the-art studio was now a very expensive warehouse. The average iPhone today has more production capability than the sum total of all that analog wizardry. Why use a studio when all you need is a green wall?

I took the tour with my old friend Daryl, who is still in broadcasting. He is the anchor of the local 6 o’clock news. Along the way we ran into a couple of other old schoolmates who were now instructors. And we did what middle-aged guys do. We reminisced about the glory days. We roamed our old domain like dinosaurs ambling towards our own twilight.

When we entered the program, it was the hottest ticket in town. They had 10 potential students vying for every program seat available. Today, on a good year, it’s down to 2 to 1. On a bad year, everyone who applies gets in. The program has struggled to remain relevant in an increasingly digital world and now focuses on those who actually want to work in television news. All the other production we used to do has been moved to a digital production program.

We couldn’t know it at the time, but we were entering broadcasting just as it reached the apex of its arc. You still needed bulk to be a broadcaster. An ENG (electronic news gathering) camera weighed in at a hefty 60-plus pounds, not including the extra battery belt. Now, all you need is a smartphone and a YouTube account. The only thing produced at most local stations is the news. And the days are numbered for even that.

If you are middle-aged like I am, your parents depend on TV for their news. For you, it’s an option – one of many places you can get it. You probably watch the 6 o’clock news more out of habit than anything. And your kids never watch it. I know mine don’t. According to the Pew Research Center, only 27% of those aged 18-29 turn to TV for their news; half of them get their news online. In my age group, 72% of us still get our news from TV, with 29% turning online. The TV news audience is literally aging to death.

My friend Daryl sees the writing on the wall. Everybody in the business does. When I met his co-anchor and told her that I had taken the digital path, she said, “Ah, an industry with a future.”

Perhaps, but then again, I never got my picture on the side of a bus.

Attention: Divided

I’d like you to give me your undivided attention. I’d like you to – but you can’t. First, I’m probably not interesting enough. Second, you no longer live in a world where that’s possible. And third, even if you could, I’m not sure I could handle it. I’m out of practice.

The fact is, our attention is almost never undivided anymore. Let’s take talking, for example. You know: old-fashioned, face-to-face, sharing-the-same-physical-space communication. It’s the one channel that most demands undivided attention. But when was the last time you had a conversation where you were giving it 100 percent of your attention? I actually had one this past week, and I have to tell you, it unnerved me. I was meeting with a museum curator, and she immediately locked eyes with me and gave me the full breadth of her attention. I faltered. I couldn’t hold her gaze. As I talked, I scanned the room we were in. It’s probably been years since someone did that to me. And nary a smartphone was in sight.

If this is true when we’re physically present, imagine the challenge in other channels. Take television, for instance. We don’t watch TV like we used to. When I was growing up, I would be verging on catatonia as I watched the sparks fly between Batman and Catwoman (the Julie Newmar version – with all due respect to Eartha Kitt and Lee Meriwether). My dad used to call it the “idiot box.” At the time, I thought it was a comment on the quality of programming, but I now realize he was referring to my mental state. You could have dropped a live badger in my lap and not an eye would have been batted.

But that’s definitely not how we watch TV now. A recent study indicates that 177 million Americans have at least one other screen going – usually a smartphone – while they watch TV. According to Nielsen, there are only 120 million TV households. That means that 1.48 adults per household are definitely dividing their attention amongst at least two devices while watching Game of Thrones. My daughters and wife are squarely in that camp. Ironically, I now get frustrated because they don’t watch TV the same way I do – catatonically.

Now, I’m sure watching TV does not represent the pinnacle of focused mindfulness. But this could be a canary in a coalmine. We simply don’t allocate undivided attention to anything anymore. We think we’re multi-tasking, but that’s a myth. We don’t multi-task – we mentally fidget. We have the average attention span of a gnat.

So, what is the price we’re paying for living in this attention-deficit world? Well, first, there’s a price to be paid when we do decide to communicate. I’ve already described how unnerving it was for me when I did have someone’s laser-focused attention. But the opposite is also true. It’s tough to communicate with someone who is obviously paying little attention to you. Try presenting to a group that is more interested in chatting to each other. Research shows that our ability to communicate effectively erodes quickly when we’re not getting feedback that the person or people we’re talking to are actually paying attention to us. Effective communication requires an adequate allocation of attention on both ends; otherwise it spins into a downward spiral.

But it’s not just communication that suffers. It’s our ability to focus on anything. It’s just too damned tempting to pick up our smartphone and check it. We’re paying a price for our mythical multitasking – Boise State professor Nancy Napier suggests a simple test to prove this. Draw two lines on a piece of paper. While having someone time you, write “I am a great multi-tasker” on one, then write down the numbers from 1 to 20 on the other. Next, repeat this same exercise, but this time, alternate between the two: write “I” on the first line, then “1” on the second, then go back and write “a” on the first, “2” on the second and so on. What’s your time? It will probably be double what it was the first time.
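
If you’d rather not hunt for pen and paper, here is a rough version of the same exercise as a script that times you at the keyboard – first one task at a time, then alternating. The phrasing and timing mechanics are my own adaptation, not Napier’s:

    import time

    # The column's phrase, minus the hyphen, is exactly 20 letters -- one
    # letter for each of the numbers 1 through 20.
    PHRASE = "I am a great multitasker"
    LETTERS = list(PHRASE.replace(" ", ""))
    NUMBERS = [str(n) for n in range(1, 21)]

    def timed(items):
        """Ask the user to type each item in turn; return elapsed seconds."""
        start = time.perf_counter()
        for item in items:
            input(f"Type '{item}' and press Enter: ")
        return time.perf_counter() - start

    # Round 1: finish one task, then the other (single-tasking).
    single = timed(LETTERS) + timed(NUMBERS)

    # Round 2: alternate letter, number, letter, number... (task switching).
    interleaved = [x for pair in zip(LETTERS, NUMBERS) for x in pair]
    switching = timed(interleaved)

    print(f"Single-tasking: {single:.1f}s   Task-switching: {switching:.1f}s")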

Every time we try to mentally juggle, we’re more likely to drop a ball. Attention is important. But we keep allocating thinner and thinner slices of it. And a big part of the reason is the smartphone that is probably within arm’s reach of you right now. Why? Because of something called intermittent variable rewards. Slot machines use them. And that’s probably why slot machines make more money in the US than baseball, movies and theme parks combined. Tristan Harris, who is taking technology to task for hijacking our brains, explains the concept: “If you want to maximize addictiveness, all tech designers need to do is link a user’s action (like pulling a lever) with a variable reward. You pull a lever and immediately receive either an enticing reward (a match, a prize!) or nothing. Addictiveness is maximized when the rate of reward is most variable.”

Your smartphone is no different. In this case, the reward is a new email, Facebook post, Instagram photo or Tinder match. Intermittent variable rewards – together with the fear of missing out – make your smartphone as addictive as a slot machine.
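
The underlying schedule is simple enough to simulate. Below is a toy comparison of a fixed reward schedule and a variable one with the same long-run payout rate – the probabilities are invented, and this is the general reinforcement pattern Harris describes, not anyone’s actual notification logic:

    import random

    random.seed(42)  # reproducible illustration

    def fixed_schedule(pulls):
        """A reward on every 5th check -- predictable, and easy to walk away from."""
        return ["reward" if (i + 1) % 5 == 0 else "nothing" for i in range(pulls)]

    def variable_schedule(pulls, p=0.2):
        """Same long-run reward rate, but you never know which check will pay off."""
        return ["reward" if random.random() < p else "nothing" for _ in range(pulls)]

    checks = 20  # think of each "pull" as one glance at your phone
    print("fixed:   ", fixed_schedule(checks))
    print("variable:", variable_schedule(checks))
    # Over many checks the two schedules pay out at a similar rate, but the
    # unpredictable one is the pattern that keeps people pulling the lever.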

I’m sorry, but I’m no match for all of that.

I, Robot….

Note: No Artificial Intelligence was involved in the creation of this column.

In 1942, science fiction writer Isaac Asimov introduced the Three Laws of Robotics in the short story “Runaround,” later collected in I, Robot:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Asimov presented the laws as coming from the Handbook of Robotics, 56th Edition, 2058 A.D. What was once an unimaginably distant time in the future is now knocking with increasing intensity on the door of the present. And Elon Musk, for one, is worried: “AI is a fundamental risk to the existence of human civilization.” Musk believes that, Laws of Robotics or no, we won’t be able to control this genie once it gets out of its bottle.

Right now, the genie looks pretty benign. In the past year, the Washington Post has used robot reporters to write over 850 stories. The Post believes this is a win-win for its human reporters, because the robot, named Heliograf, can (a rough sketch of the general technique follows the list):

  • Cover stories that wouldn’t have been covered due to lack of human resources
  • Do the factual heavy lifting for human reporters
  • Alert humans to possible news stories in big data sets
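
Heliograf is reported to work along the lines of most robot-reporting systems: structured data – election results, sports scores – gets slotted into pre-written narrative templates, with anything ambiguous kicked over to a human. Here is a deliberately tiny sketch of that general technique; the template, field names and numbers are all hypothetical, not the Post’s:

    from string import Template

    # A toy "robot reporter": structured data dropped into a pre-written
    # narrative template. Everything here is invented for illustration.
    STORY_TEMPLATE = Template(
        "$winner defeated $loser, $winner_pct% to $loser_pct%, in $district "
        "on Tuesday, keeping the seat in $party hands."
    )

    def write_story(row):
        """Turn one row of results into a sentence; flag close races for a human."""
        story = STORY_TEMPLATE.substitute(row)
        if row["winner_pct"] - row["loser_pct"] < 2:
            story += " [Flagged for a human reporter: margin under 2 points.]"
        return story

    # Imagine this row arriving from an election-night data feed.
    print(write_story({
        "winner": "Jane Smith", "loser": "John Doe",
        "winner_pct": 52, "loser_pct": 45,
        "district": "District 12", "party": "the Purple Party's",
    }))

That’s about as far as the magic goes – which is exactly why the division of labor in the list above makes sense.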

So, should we fear or cheer robots? I think the Post’s experiment highlights two areas that AI excels at, and indicates how we might play nice with machines.

For AI to work effectively, the dots have to be pretty well sketched out. When they are, AI can be tireless in scouting out relevant facts and data where humans would tend to get bored easily. But humans are still much better at connecting those dots, especially when no obvious connection is apparent. We do it through something called intuition. It’s at least one area where we can still blow machines away.

Machines are also good at detecting patterns in overwhelming amounts of data. Humans tend to overfit…make the data fit our narratives. We’ll come back to this point in a minute, but for now, let’s go back to intuition. It’s still the trump card we humans hold. In 2008, Wired editor Chris Anderson prematurely (and, many believe, incorrectly) declared the Scientific Method dead, thanks to the massive data sets we now have available:

“We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.”

Anderson gets it partly right, but he also unfairly gives intuition short shrift. This is not a zero sum game. Intuition and A.I. can and should play nicely together. As I mentioned a few weeks ago, human intuition was found to boost the effectiveness of an optimization algorithm by 25%.

Evolutionary biologist Richard Dawkins recently came to the defense of intuition in Science, saying:

“Science proceeds by intuitive leaps of the imagination – building an idea of what might be true, and then testing it”

The very human problem comes when we let our imaginations run away from the facts, bending science to fit our hypotheses:

“It is important that scientists should not be so wedded to that intuition that they omit the very important testing stage.”

There is a kind of reciprocation here – an oscillation between phases. Humans are great at some stages – the ones that require intuition and imagination – and machines are better at others – where a cold and dispassionate analysis of the facts is required. Like most things in nature that pulse with a natural rhythm, the whole gains from the opposing forces at work here. It is a symphony with a beat and a counterbeat.

That’s why, for the immediate future anyway, machines should bend not to our will, but to our imagination.