Why Reality is in Deep Trouble

If 2017 was the year of Fake News, 2018 could well be the year of Fake Reality.

You Can’t Believe Your Eyes

I just saw Star Wars: The Last Jedi. When Carrie Fisher came on screen, I had to ask myself: Is this really her or is that CGI? I couldn’t remember if she had the chance to do all her scenes before her tragic passing last year. When I checked later, I found that it was actually her. But the very fact that I had to ask the question is telling. After all, Star Wars Rogue One did resurrect Peter Cushing via CGI, and he passed away more than two decades ago.

CGI is not quite to the point where you can’t tell the difference between reality and computer generation, but it’s only a hair’s breadth away. It’s definitely to the point where you can no longer trust your eyes. And that has some interesting implications.

You Can Now Put Words in Anyone’s Mouth

The Rogue One Visual Effects head, John Knoll, had to fend off some pointed questions about the ethics of bringing a dead actor back to life. He defended the move by saying, “We didn’t do anything Peter Cushing would have objected to.” Whether you agree or not, the bigger point here is that they could have. They could have made the Cushing digital doppelganger do anything – and say anything – they wanted.

But It’s Not Just Hollywood That Can Warp Reality

If fake reality comes out of Hollywood, we are prepared to cut it some slack. There is a long and slippery ethical slope that defines the entertainment landscape. In Rogue One’s case, the issue wasn’t using CGI, or even using CGI to represent a human – that covers a huge slice of today’s entertainment. It was using CGI to resurrect a dead actor and literally put words in his mouth. That seemed to cross some ethical line in our perception of what’s real. But at the end of the day, this questionable warping of reality was still embedded in a fictional context.

But what if we could put words in the manufactured mouth of a sitting US president? That’s exactly what a team at the University of Washington did with Barack Obama, using Stanford’s Face2Face technology. They used a neural network to essentially create a lip sync video of Obama, with the computer manipulating images of his face so that his mouth movements matched audio sampled from a different speech.

Being academics, they kept everything squeaky clean on the ethical front. All the words were Obama’s – it’s just that they were said at two different times. But those less scrupulous could easily synthesize Obama’s voice – or anyone’s – and sync it to video of them talking that would be indistinguishable from reality.

Why We Usually Believe Our Eyes

When it comes to a transmitted representation of reality, we accept video as the gold standard. Our brains believe what we see to be real. Of all our five senses, we trust sight the most to interpret what is real and what is fake. Photos used to be accepted as incontrovertible proof of reality, until Photoshop messed that up. Now, it’s video’s turn. Technology has handed us the tools that enable us to manufacture any reality we wish and distribute it in the form of video. And because it’s in that form, most everyone will believe it to be true.

Reality, Inc.

The concept of a universally understood and verifiable reality is important. It creates some type of provable common ground. We have always had our own ways of interpreting reality, but at the end of the day, there was typically someone, and some way, to empirically determine what was real, if we just bothered to look for it.

But we now run the risk of accepting manufactured reality as “good enough” for our purposes. In the past few years, we’ve discovered just how dangerous filtered reality can be. Whether we like it or not, Facebook, Google, YouTube and other mega-platforms are now responsible for how most of us interpret our world. These are for-profit organizations that really have no ethical obligation to attempt to provide a reasonable facsimile of reality. They have already outstripped the restraints of legislation and any type of ethical oversight. Now, these same platforms can be used to distribute media that are specifically designed to falsify reality. Of course, I should also mention that in return for access to all this, we give up a startling amount of information about ourselves. And that, according to UBC professor Taylor Owen, is deeply troubling:

“It means thinking very differently about the bargain that platforms are offering us. For a decade the deal has been that users get free services, and platforms get virtually unlimited collection of data about all aspects of our life and the ability to shape the information we consume. The answer isn’t to disengage, as these tools are embedded in our society, but instead to think critically about this bargain.

“For example, is it worth having Facebook on your mobile phone in exchange for the immense tracking data about your digital and offline behaviour? Or is the free children’s content available on YouTube worth the data profile that is being built about your toddler, the horrific content that gets algorithmically placed into your child’s feed, and the ways in which A.I. are creating content for them and shaping what they view? Is the Amazon smart speaker in your living room worth providing Amazon access to everything you say in your home? For me, the answer is a resounding ‘no’.”

2018 could be an interesting year…

Which Me am I — And On Which Network?

I got an email from Strava. If you’re not familiar with it, Strava is a social network for cyclists and runners. As the former, I joined Strava about two years ago.

Here is the email I received:

Your Friends Are on Strava

 Add friends to follow their adventures and get inspired by their workouts

 J. Doe, Somewhere, CA

 “Follow”

 (Note: the personal information has been changed because after preaching about privacy for the last two weeks, I do have to practice what I preach)

Here’s the thing: I’m not friends with Mr. Doe. I met him a few times on the speaking circuit when we crossed paths. To be brutally honest, J. Doe was a connection I thought would help me grow my business. He was a higher profile speaker than I was. He’d written a book that sold way more copies than mine ever did. I was “friending up” in my networking.

The last time we met each other — several years ago now — I quickly extended a Facebook friends invite. At the time, I — and the rest of the world — was using Facebook as a catch-all bucket for all my social connections: friends, family and the people I was unabashedly stalking in order to make more money. And J. Doe accepted my invite. It gave my ego a nice little boost at the time.

So, according to Facebook, we’re friends. But we’re not — not really. And that became clear when I got the Strava invite. It would have been really weird if I connected with him on Strava, following his adventures and being inspired by his workouts. We just don’t have that type of relationship. There was no social basis for me to make that connection.

I have different social spheres in my life. I have the remnants of my past professional life as an online marketer. I have my passion as a cyclist. I have a new emerging sphere as a fledgling tourism operator. I have my family.

I could go on. I can think of only a handful of people who comfortably lie within two or more of my spheres.

But with social sign-ins (which I used for Strava), those spheres are suddenly mashed together. It’s becoming clear that, socially, we are complex creatures with many, many sides.

Facebook would love nothing more than to be the sole supporting platform of our entire social grid. But that works at cross purposes with how humans socialize. It’s not a monolithic, one-size-fits-all thing, but a sprawling landscape cluttered with very distinctive nodes that are haphazardly linked together.

The only common denominator is ourselves, in the middle of that mess. And even we can have surprising variability. The me that loves cycling is a very different guy from the me that wanted to grow my business profile.

This modality – the fact that we have many distinct social selves – is driving an expansion of niche, socially connected destinations.

Strava is a good example of this. Arguably, it provides a way to track my rides. But it also aspires to be the leading community of athletes. And that’s where it runs headlong into the problem of social modality.

Social sign-ins seem to be a win-win-win. For the user, it eases the headache of maintaining an ever-expanding list of user names and passwords. Sure, there’s that momentary lurch in the pit of our stomachs when we get that warning that we’re sharing our entire lives with the proprietors of the new site, but that goes away with just one little click.

For the website owner, every new social sign-in user comes complete with rich new data and access to all of their contacts. And finally, Facebook gets to sink its talons into us just a little deeper, gathering data from yet one more online outpost.

But like many things that seem beneficial, unintended consequences are part of the package. This is especially true when the third party I’m signing up for is creating its own community.

Is the “me” that wants to become part of this new community the “me” that Facebook thinks I am? Will things get weird when these two social spheres are mashed together?

Because Facebook assumes that I am always me and you are always you, whatever the context, some of us are forced to splinter our online social personas by maintaining multiple profiles. We may have a work profile and a social one.

The person Facebook thinks we are may be significantly different from the person LinkedIn thinks we are.  Keeping our social selves separate becomes a juggling act of ever-increasing proportions.

So why does Facebook want me to always be me?  It’s because of us — and by us, I mean marketers. We love the idea of markets that are universal and targeting that is omniscient. It just makes our lives so much easier. Our lives as marketers, I mean.

As people? Well, that’s another story — but right now, I’m a marketer.

See the problem?

What Price Privacy?

As promised, I’m picking up the thread from last week’s column on why we seem okay with trading privacy for convenience. The simple – and most plausible – answer is that we’re really not being given a choice.

As Mediapost Senior Editor Joe Mandese pointed out in a very on-point comment, what is being created is a transactional marketplace where offers of value are exchanged for information:

“Like any marketplace, you have to have your information represented in it to participate. If you’re not “listed” you cannot receive bids (offers of value) based on who you are.”

Amazon is perhaps the most relevant example of this. Take Alexa and Amazon Web Services (AWS). Alexa promises to “make your life easier and more fun.” But this comes at a price. Because Alexa is voice activated, it’s always listening. That means the privacy of anything we say in our homes has been ceded to Amazon through its terms of service. The same is true for Google Assistant and Apple’s Siri.

But Amazon is pushing the privacy envelope even further as they test their new in-home delivery service – Amazon Key. In exchange for the convenience of having your parcels delivered inside your home when you’re away, you literally give Amazon the keys to your home. Your front door will have a smart door lock that can be opened via the remote servers of AWS. Opt in to this and suddenly you’ve given Amazon the right to not only listen to everything you say in your home but also to enter your home whenever they wish.

How do you feel about that?

This becomes the key question: how do we feel about the convenience/privacy exchange? It turns out that our response depends in large part on how that question is framed. In a study conducted in 2015 by the Annenberg School for Communication at the University of Pennsylvania, researchers gathered responses from participants, probing their sensitivity around the trading of privacy for convenience. Here is a sampling of the results:

  • 55% of respondents disagreed with the statement: “It’s OK if a store where I shop uses information it has about me to create a picture of me that improves the services they provide for me.”
  • 71% disagreed with: “It’s fair for an online or physical store to monitor what I’m doing online when I’m there, in exchange for letting me use the store’s wireless internet, or Wi-Fi, without charge.”
  • 91% disagreed that: “If companies give me a discount, it is a fair exchange for them to collect information about me without my knowing.”

Here, along the spectrum of privacy pushback, we start to see what the real problem is. We’re willing to exchange private information, as long as we’re aware of all that is happening and feel in control of it. But that, of course, is unrealistic. We can’t control it. And even if we could, we’d soon learn that the overhead required to do so is unmanageable. It’s why Vint Cerf said we’re going to have to learn to live with transparency.

Again, as Mr. Mandese points out, we’re really not being given a choice. Participating in the modern economy requires us to ante up personal information. If we choose to remain totally private, we cut ourselves off from a huge portion of what’s available. And we are already at the point where the vast majority of us really can’t opt out. We all get pissed off when we hear of a security breach à la the recent Equifax debacle. Our privacy sensitivities are heightened for a day or two and we give lip service to outrage. But unless we go full-out Old Order Amish, what are our choices?

We may rationalize the trade-off by saying the private information we’re exchanging for services is not really that sensitive. But that’s where the potential threat of Big Data comes in. Gather enough seemingly innocent data and soon you can start predicting, with startling accuracy, the aspects of our lives that we are sensitive about. We run headlong into the Target Pregnant Teen dilemma. And that particular dilemma becomes thornier as the walls break down between data silos and your personal information becomes a commodity on an open market.

The potential risk of trading away our privacy escalates over time – it’s the frog-in-boiling-water syndrome. It starts innocently but can soon develop into a scenario that will keep most anyone up at night with the paranoiac cold sweats. Let’s say the data is used for targeting – singling us out of the crowd for the purpose of selling stuff to us. Or – in the case of governments – seeing if we have a proclivity for terrorism. Perhaps that isn’t so scary if Big Brother is benevolent and looking out for our best interests. But what if Big Brother becomes a bully?

There is another important aspect to consider here, one that may have dire unintended consequences. When our personal data is used to make our world more convenient for us, that requires a “filtering” of that world by some type of algorithm to remove anything the algorithm determines to be irrelevant or uninteresting to us. Essentially, the entire physical world is “targeted” to us. And this can go horribly wrong, as we saw in the last presidential election. Increasingly, we live in a filtered “bubble” determined by things beyond our control. Our views get trapped in an echo chamber and our perspective narrows.

But perhaps the biggest red flag is the fact that in signing away our privacy by clicking accept, we often also sign away any potential protection when things do go wrong. In another study, called “The Biggest Lie on the Internet,” researchers found that when students were presented with a fictitious terms of service and privacy policy, 74% skipped reading it. And those who did take the time to read didn’t take very much time – just 73 seconds on average. What almost no one caught were “gotcha clauses” about data sharing with the NSA and giving up your first-born child. While these were fictitious, real terms of service and privacy notifications often include clauses that grant total control over the information gathered about you and strip away your right to sue if anything goes wrong. Even if you could sue, there might not be anyone left to sue. One analyst calculated that even if all the people who had their financial information stolen from Equifax won a settlement, it would actually amount to about $81.

 

Why We’re Trading Privacy for Convenience

In today’s world, increasingly quantified and tracked by the Internet of Things, we are talking a lot about privacy. When we stop to think about it, we are vociferously for privacy. But then we immediately turn around and click another “accept” box on a terms and conditions form that barters our personal privacy away, in increasingly large chunks. What we say and what we do are two very different things.

What is the deal with humans and privacy anyway? Why do we say it is important to us, and why do we keep giving it away? Are we looking at the inevitable death of our concept of privacy?

Are We Hardwired for Privacy?

It does seem that – all things being equal – we favor privacy. But why?

There is an evolutionary argument for having some “me-time”. Privacy has an evolutionary advantage both when you’re most vulnerable to physical danger (on the toilet) and when you’re vulnerable to mating rivalry (having sex). If you can keep these things private, you’ll both live longer and have more offspring. So it’s not unusual for humans to be hardwired to desire a certain amount of privacy.

But our modern understanding of privacy actually conflates a number of concepts. There is protective privacy, there is the need for solitude, and finally there is our moral and ethical privacy. Each of these has different behavioral origins, but when we talk about our “right to privacy” we don’t distinguish between them. This can muddy the waters when we dig deep into our relationship with our privacy.

Blame England…

Let’s start with the last of these – our moral privacy. This is actually a pretty modern concept. Until 150 years ago, we as a species did pretty much everything communally. Our modern concept of privacy had its roots in the Industrial Revolution and Victorian England. There, the widespread availability of the patent lock and the introduction of the “private” room quickly led to a class-stratified quest for privacy. This was coupled with the moral rectitude of the time. Kate Kershner from howstuffworks.com explains:

“In the Victorian era, the ‘personal’ became taboo; the gilded presentation of yourself and family was critical to social standing. Women were responsible for outward piety and purity, men had to exert control over inner desires and urges, and everyone was responsible for keeping up appearances.”

In Victorian England, privacy became a proxy for social status. Only the highest levels of the social elite could afford privacy. True, there was some degree of personal protection here that probably had evolutionary behavioral underpinnings, but it was all tied up in the broader evolutionary concept of social status. The higher your class, the more you could hide away the all-too-human aspects of your private life and thoughts. In this sense, privacy was not a right, but a status token that may be traded off for another token of equal or higher value. I suspect this is why we may say one thing but do another when it comes to our own privacy. There are other ways we determine status now.

Privacy vs Convenience

In a previous column, I wrote about how being busy is the new status symbol. We are defining social status differently, and I think how we view privacy might be caught between how we used to recognize status and how we do it today. In 2013, Google’s Vint Cerf said that privacy may be a historical anomaly. Social libertarians and legislators were quick to condemn Cerf’s comment, but it’s hard to argue with his logic. In Cerf’s words, transparency “is something we’re gonna have to live through.”

Privacy might still be a hot-button topic for legislators, but it’s probably dying – not because of some nefarious plot against us, but because we’re quickly trading it away. Busy is the new rich, and convenience (or our illusion of convenience) allows us to do more things. Privacy may just be a tally token in our quest for social status and, increasingly, we may be willing to trade it for more relevant tokens. As Greg Ferenstein, author of the Ferenstein Wire, said in an exhaustive (and visually bountiful) post on the birth and death of privacy:

“Humans invariably choose money, prestige or convenience when it has conflicted with a desire for solitude.”

If we take this view, then it’s not so much how we lose our privacy that becomes important but who we’re losing it to. We seem all too willing to give up our personal data as long as two prerequisites are met: 1) We get something in return; and, 2) We have a little bit of trust in the holder of our data that they won’t use it for evil purposes.

I know those two points raise the hackles of many amongst you, but that’s where I’ll have to leave it for now. I welcome you to have the next-to-last word (because I’ll definitely be revisiting this topic). Is privacy going off the rails and, if so, why?

The Retrofitting of Broadcasting

I returned to my broadcast school for a visit last week. Yes, it was nostalgic, but it was also kind of weird.

Here’s why…

I went to broadcast school in the early ’80s. The program I attended, at the Northern Alberta Institute of Technology, had just built brand new studios, outfitted with the latest equipment. We were the first group of students to get our hands on the stuff. Some of the local TV stations even borrowed our studio to do their own productions. SCTV – with the great John Candy, Catherine O’Hara, Eugene Levy, Rick Moranis and Andrea Martin – was produced just down the road at ITV. It was a heady time to be in TV. I don’t want to brag, but yeah, we were kind of a big deal on campus.

That was then. This was now. I went back for my first visit in 35 years, and nothing had really changed physically. The studios, the radio production suites, the equipment racks, the master control switcher – it was all still there – in all its bulky, behemoth-like glory. They hadn’t even changed the lockers. My old one was still down from Equipment Stores and right across from one of the classrooms.

The disruption of the past four decades was instantly crystallized. None of the students today touched any of that ’80s-era technology – well, except for the locker. That was still functional. The rows and rows of switches, rotary pots, faders and other doodads hadn’t been used in years. The main switching board served as a makeshift desk for a few computer monitors and a keyboard. The radio production suites were used to store old office chairs. The main studio, where we once taped interviews, music videos, multi-camera dramas, sketch comedies and even a staged bar fight? Yep, more storage.

The campus news show was still shot in the corner, but the rest of that once state-of-the-art studio was now a very expensive warehouse. The average iPhone today has more production capability than the sum total of all that analog wizardry. Why use a studio when all you need is a green wall?

I took the tour with my old friend Daryl, who is still in broadcasting. He is the anchor of the local 6 o’clock news. Along the way we ran into a couple of other old schoolmates who were now instructors. And we did what middle-aged guys do. We reminisced about the glory days. We roamed our old domain like dinosaurs ambling towards our own twilight.

When we entered the program, it was the hottest ticket in town. They had 10 potential students vying for every program seat available. Today, on a good year, it’s down to 2 to 1. On a bad year, everyone who applies gets in. The program has struggled to remain relevant in an increasingly digital world and now focuses on those who actually want to work in television news. All the other production we used to do has been moved to a digital production program.

We couldn’t know it at the time, but we were entering broadcasting just when broadcasting had reached the apex of its arc. You still needed bulk to be a broadcaster. An ENG (Electronic News Gathering) camera weighed in at a hefty 60 pounds plus, not including the extra battery belt. Now, all you need is a smartphone and a YouTube account. The only thing produced at most local stations is the news. And the days are numbered for even that.

If you are middle aged like I am, your parents depend on TV for their news. For you, it’s an option – one of many places you can get it. You probably watch the 6 o’clock news more out of habit than anything. And your kids never watch it. I know mine don’t. According to the Pew Research Center, only 27% of those 18-29 turn to TV for their news. Half of them get their news online. In my age group, 72% of us still get our news from TV, with 29% of us turning online. The TV news audience is literally aging to death.

My friend Daryl sees the writing on the wall. Everybody in the business does. When I met his co-anchor and told her that I had taken the digital path, she said, “Ah, an industry with a future.”

Perhaps, but then again, I never got my picture on the side of a bus.

Why I Go to a Store

I hate shopping. Let me clarify. I hate the physical experience of shopping. I find no joy in a mall. I avoid department stores like the plague. If I can buy it online, I will.

Except… I don’t, always.

Why is that? I should be the gold standard of e-commerce targets. And most of the time, I am. Except when I’m not. Take home improvement stuff, for instance. I still drive down to my local Home Depot, even though I can order online.

As prognosticators of the online space, we’ve been busy hammering the nails into the coffin of bricks and mortar retail for a while. In a recent story in The Atlantic, e-tail was called the perfect match for the emerging sloth of the first-world consumer: “E-commerce is soaring and food-delivery businesses are taking off because human beings are fundamentally lazy and they don’t want to leave the couch to buy stuff.”

That makes sense. But while the smart bets seem to be placed on a consumer stampede heading towards e-tail, Amazon just invested $13.7 billion in buying Whole Foods Market. So if bricks and mortar retail is dead, why the hell did Amazon buy almost 500 more physical stores? That same Atlantic article does a pretty thorough job of answering this question, offering three compelling reasons:

  • To dominate the food delivery market
  • To create an instant fulfillment network
  • To broaden Amazon’s footprint within the consumption habits of affluent Americans

I can buy that. The second point in particular seems to make eminent sense. If I know something is in stock at my local store and I need it right now, I’ll make the trip. And Amazon is currently struggling to deliver the last mile of fulfillment. But I keep going back to my original question: why do I – a man who detests the physical act of shopping – still decide to go to a store more often than I probably want to?

There have been various strategies put forward for the salvation of retail. In a recent post on Mediapost, Mahesh Krishna said personalization was the answer – use data to tailor an in-store experience. I myself wrote something similar in a previous post about Amazon testing the waters of a bricks and mortar retail environment. But there’s nothing personalized about Home Depot. I’m anonymous till I get to the till. So for me, anyway, that doesn’t seem to explain why.

Experiential shopping is another proffered recipe for the salvation of retail. A recent article from Wharton cited an Italian culinary-themed retail success story: “Another experiential success… is Eataly, a chain of Italian marketplaces that combines restaurants, grocery stores and cooking schools. It capitalizes on the appeal of Italian culture and sophistication. ‘It all works together like a little universe,’ she says. ‘There’s a nice synergy there; you can taste the foods in the restaurant … you might then go to the grocery store to buy it so you can make it at home.’”

But how much “experience” do I really need in my shopping? The answer is not a lot. As undeniably fantastico as Eataly is, for me it would be a 3 to 4 times a year visit. And let’s face it – the retail niches that suit this over-the-top experiential approach are limited. No, there needs to be a more pragmatic reason why I’ll actually drag my butt away from a screen and down to the local mercantile.

When I really examined the reasons why I do go to the store, I realized they all had to do with risk. I go to the store when I’m afraid that stuff could go wrong:

  1. When I’m unsure what I need
  2. When I’m afraid I may have to return what I bought
  3. When I have to ask a question about how to use something I want to buy

For me, bricks and mortar shopping is usually nothing more than a risk-mitigation strategy, pure and simple. And I suspect I’m not alone. Apple Stores are often cited as an example of experiential shopping, but I believe the real genius of this retail success story is the Genius Bar. The jigsaw puzzle integration of the All Things Apple universe can be a daunting prospect. Having an actual human to guide you through the process is reassuring, and reassurance is most effective when it’s face-to-face. That’s why I go to a store.

 

Attention: Divided

I’d like you to give me your undivided attention. I’d like you to – but you can’t. First, I’m probably not interesting enough. Second, you no longer live in a world where that’s possible. And third, even if you could, I’m not sure I could handle it. I’m out of practice.

The fact is, our attention is almost never undivided anymore. Let’s take talking, for example. You know: old-fashioned, face-to-face, sharing-the-same-physical-space communication. It’s the one channel that most demands undivided attention. But when is the last time you had a conversation where you were giving it 100 percent of your attention? I actually had one this past week and, I have to tell you, it unnerved me. I was meeting with a museum curator, and she immediately locked eyes on me and gave me the full breadth of her attention. I faltered. I couldn’t hold her gaze. As I talked, I scanned the room we were in. It’s probably been years since someone did that to me. And nary a smartphone was in sight.

If this is true when we’re physically present, imagine the challenge in other channels. Take television, for instance. We don’t watch TV like we used to. When I was growing up, I would be verging on catatonia as I watched the sparks fly between Batman and Catwoman (the Julie Newmar version – with all due respect to Eartha Kitt and Lee Meriwether). My dad used to call it the “idiot box.” At the time, I thought it was a comment on the quality of programming, but I now realize he was referring to my mental state. You could have dropped a live badger in my lap and not an eye would have been batted.

But that’s definitely not how we watch TV now. A recent study indicates that 177 million Americans have at least one other screen going – usually a smartphone – while they watch TV. According to Nielsen, there are only 120 million TV households. That means that 1.48 adults per household are definitely dividing their attention amongst at least two devices while watching Game of Thrones. My daughters and wife are squarely in that camp. Ironically, I now get frustrated because they don’t watch TV the same way I do – catatonically.

Now, I’m sure watching TV does not represent the pinnacle of focused mindfulness. But this could be a canary in a coalmine. We simply don’t allocate undivided attention to anything anymore. We think we’re multi-tasking, but that’s a myth. We don’t multi-task – we mentally fidget. We have the average attention span of a gnat.

So, what is the price we’re paying for living in this attention-deficit world? Well, first, there’s a price to be paid when we do decide to communicate. I’ve already stated how unnerving it was for me when I did have someone’s laser-focused attention. But the opposite is also true. It’s tough to communicate with someone who is obviously paying little attention to you. Try presenting to a group that is more interested in chatting to each other. Research studies show that our ability to communicate effectively erodes quickly when we’re not getting feedback that the person or people we’re talking to are actually paying attention to us. Effective communication requires an adequate allocation of attention on both ends; otherwise it spins into a downward spiral.

But it’s not just communication that suffers. It’s our ability to focus on anything. It’s just too damned tempting to pick up our smartphone and check it. We’re paying a price for our mythical multitasking – Boise State professor Nancy Napier suggests a simple test to prove this. Draw two lines on a piece of paper. While having someone time you, write “I am a great multi-tasker” on one, then write down the numbers from 1 to 20 on the other. Next, repeat this same exercise, but this time, alternate between the two: write “I” on the first line, then “1” on the second, then go back and write “a” on the first, “2” on the second and so on. What’s your time? It will probably be double what it was the first time.

Every time we try to mentally juggle, we’re more likely to drop a ball. Attention is important. But we keep allocating thinner and thinner slices of it. And a big part of the reason is the smartphone that is probably within arm’s reach of you right now. Why? Because of something called intermittent variable rewards. Slot machines use it. And that’s probably why slot machines make more money in the US than baseball, movies and theme parks combined. Tristan Harris, who is taking technology to task for hijacking our brains, explains the concept: “If you want to maximize addictiveness, all tech designers need to do is link a user’s action (like pulling a lever) with a variable reward. You pull a lever and immediately receive either an enticing reward (a match, a prize!) or nothing. Addictiveness is maximized when the rate of reward is most variable.”
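To make the mechanism concrete, here is a minimal Python sketch of the two kinds of reward schedule Harris is contrasting. Everything in it – the function names, the payout sizes, the one-in-five rate – is invented purely for illustration; what matters is the shape of the schedule, not any real product’s logic.

    import random

    def fixed_schedule(pulls, every=5, size=1.0):
        # Predictable: the same reward arrives on every 5th pull.
        return [size if (i + 1) % every == 0 else 0.0 for i in range(pulls)]

    def variable_schedule(pulls, p=0.2, sizes=(0.5, 1.0, 1.5)):
        # Intermittent and variable: roughly the same average rate and value,
        # but you never know when a reward will come or how big it will be.
        return [random.choice(sizes) if random.random() < p else 0.0
                for _ in range(pulls)]

    if __name__ == "__main__":
        print("fixed:   ", fixed_schedule(20))
        print("variable:", variable_schedule(20))

Run it a few times: the fixed list never changes, while the variable one never repeats. Both schedules pay out at about the same long-run rate; the only real difference is predictability, which is exactly the knob Harris says designers turn to maximize addictiveness.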

Your smartphone is no different. In this case, the reward is a new email, Facebook post, Instagram photo or Tinder match. Intermittent variable rewards – together with the fear of missing out – make your smartphone as addictive as a slot machine.

I’m sorry, but I’m no match for all of that.

Will We Ever Let Robots Shop for Us?

Several years ago, my family and I visited Astoria, Oregon. You’ll find it at the mouth of the Columbia River, where it empties into the Pacific. We happened to take a tour of Astoria and our guide pointed out a warehouse. He told us it was filled with canned salmon, waiting to be labeled and shipped. I asked what brand they were. His answer was “All of them. They all come from the same warehouse. The only thing different is the label.”

Ahh… the power of branding…

Labels can make a huge difference. If you need proof, look no further than the experimental introduction of generic brands in grocery stores. Well, they were generic to begin with, anyway. But over time, the generic “yellow label” was replaced with a plethora of store brands. The quality of what’s inside the box hasn’t changed much, but the packaging has. We do love our brands.

But there’s often no rational reason to do so. Take the aforementioned canned salmon, for example. Same fish, no matter what label you may stick on it. Brands are a trick our brain plays on us. We may swear our favorite brand tastes better than its competitors, but it’s usually just our brain short-circuiting our senses and our sensibility. Neuroscientist Read Montague found this out when he redid the classic Pepsi taste test using an fMRI scanner. The result? When Coke drinkers didn’t know what they were drinking, the majority preferred Pepsi. But the minute the brand was revealed, they again swore allegiance to Coke. The taste hadn’t changed, but their brains had. As soon as the brain was aware of the brand, some parts of it suddenly started lighting up like a pinball machine.

In previous research we did, we found that the brain instantly responded to favored brands the same way it did to a picture of a friend or a smiling face. Our brains have an instantaneous and subconscious response to brands. And because of that, our brains shouldn’t be trusted with buying decisions. We’d be better off letting a robot do it for us.

And I’m not saying that facetiously.

A recent post on Bloomberg.com looked forward 20 years and predicted how automation would gradually take over every step of the consumer product supply chain, from manufacturing to shipping to delivery to our door. The post predicts that the factory floor, the warehouse, ocean liners, trucks and delivery drones will all be powered by artificial intelligence and robotic labor. The first set of human hands that might touch a product would be those of the buyer. But maybe we’re automating the wrong side of the consumer transaction. The thing human hands shouldn’t be touching is the buy button. We suck at it.

We have taken some steps in the right direction. Itamar Simonson and Emanuel Rosen predicted the death of branding in their book Absolute Value:

“In the past the marketing function “protected” the organization in some cases. When things like positioning, branding, or persuasion worked effectively, a mediocre company with a good marketing arm (and deep pockets for advertising) could get by. Now, as consumers are becoming less influenced by quality proxies, and as more consumers base their decisions on their likely experience with a product, this is changing.”

But our brand love dies hard. If our brain can literally rewire the evidence from our own senses – how can we possibly make rational buying decisions? True, as Simonson and Rosen point out, we do tend to favor objective information when it’s available, but at the end of the day, our buying decisions still rely on an instrument that has proven itself unreliable in making optimal decisions under the influence of brand messaging.

If we’re prepared to let robots steer ships, drive trucks and run factories, why won’t we let them shop for us? Existing shopping bots stop well short of actually making the purchase. We’ll put our lives in the hands of A.I. in a myriad of ways, but we won’t hand our credit card over. Why is that?

It seems ironic to me. If there were any area where machines could beat humans, it would be in making purchases. They’re much better at filtering based on objective criteria, they can stay on top of all prices everywhere, and they can instantly aggregate data from all similar types of purchases. Most importantly, machines can’t be tricked by branding or marketing. They can complete the Absolute Value loop Simonson and Rosen talk about in their book.

Of course, there’s just one little problem with all that. It essentially ends the entire marketing and advertising industry.

Ooops.

Bias, Bug or Feature?

When we talk about artificial intelligence, I think of a real-time Venn diagram in motion. One side is the sphere of all human activity. This circle is huge. The other side is the sphere of artificially intelligent activity. It’s growing exponentially. And the overlap area between the two is also expanding at the same rate. It’s this intersection between the two spheres that fascinates me. What are the rules that govern the interplay between humans and machines?

Those rules necessarily depend on what the nature of the interplay is. For the sake of this column, let’s focus on the researchers and developers who are trying to make machines act more like humans. Take Jibo, for example. Jibo is “the first social robot for the home.” Jibo tells jokes, answers questions, understands nuanced language and recognizes your face. It’s just one more example of artificial intelligence that’s intended to be a human companion. And as we build machines that are more human, we’re finding that many of the things we thought were human foibles are actually features that developed for reasons that were at one time perfectly valid.

Trevor Paglen is a winner of the MacArthur Genius Grant. His latest project is to see what AI sees when it’s looking at us: “What are artificial intelligence systems actually seeing when they see the world?” What is interesting about this is that when machines see the world, they use machine-like reasoning to make sense of it. For example, Paglen fed hundreds of images of fellow artist Hito Steyerl into a face-analyzing algorithm. In one instance, she was evaluated as “74% female”.

This highlights a fundamental difference in how machines and humans see the world. Machines calculate probabilities. So do we, but that happens behind the scenes, and it’s only part of how we understand the world. Operating a level higher than that, we use meta-signatures – categorization, for example – to quickly compartmentalize and understand the world. We would know immediately that Hito was a woman. We wouldn’t have to crunch the probabilities. By the way, we do the same thing with race.
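To see what that difference looks like in code, here is a small, purely illustrative Python sketch. The file name, the numbers and the function names are all invented – this is not Paglen’s system or any real face-analysis model – but it shows how a machine hands back probabilities, while a single category only appears once something collapses them into a label.

    def face_model(image_path):
        # Stand-in for a face-analysis model: it returns probabilities,
        # not a verdict. Hard-coded here purely for illustration.
        return {"female": 0.74, "male": 0.26}

    def categorize(probabilities, threshold=0.5):
        # Collapse the probabilities into a single label - closer to the
        # instant, meta-signature judgment a human would report.
        label, p = max(probabilities.items(), key=lambda kv: kv[1])
        return label if p >= threshold else "uncertain"

    probs = face_model("hito_steyerl.jpg")
    print(probs)              # {'female': 0.74, 'male': 0.26}
    print(categorize(probs))  # female

The “74% female” result is the first of those two outputs; a human observer never reports the arithmetic, only the category.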

But is this a feature or a bug? Paglen has his opinion: “I would argue that racism, for example, is a feature of machine learning—it’s not a bug,” he says. “That’s what you’re trying to do: you’re trying to differentiate between people based on metadata signatures and race is like the biggest metadata signature around. You’re not going to get that out of the system.”

Whether we like it or not, our inherent racism was a useful feature many thousands of years ago. It made us naturally wary of other tribes competing for the same natural resources. As much as it’s abhorrent to most of us now, it’s still a feature that we can’t “get out of the system.”

This highlights a danger in this overlap area between humans and machines. If we want machines to think as we do, we’re going to have to equip them with some of our biases. As I’ve mentioned before, there are some things that humans do well – or, at least, that we do better than machines. And there are things machines do infinitely better than we do. Perhaps we shouldn’t try to merge the two. If we’re trying to get machines to do what humans do, are we prepared to program racism, misogyny, intolerance, bias and greed into the operating system? All these things are part of being human, whether we like to admit it or not.

But there are other areas that are rapidly falling into the overlap zone of my imaginary Venn diagram. Take business strategy, for example. A recent study from Capgemini showed that 79% of organizations implementing AI feel it’s bringing new insights and better data analysis, 74% feel it makes their organizations more creative, and 71% feel it’s helping them make better management decisions. A friend of mine recently brought this to my attention along with what was, for him, an uncharacteristic rant: “I really would’ve hoped senior executives might’ve thought creativity and better management decisions were THEIR GODDAMN JOB and not be so excited about being able to offload those dreary functions to AI’s which are guaranteed to be imbued with the biases of their creators or, even worse, unintended biases resulting from bad data or any of the untold messy parts of life that can’t be cleanly digitized.”

My friend hit the proverbial nail on the proverbial head – those “untold messy parts of life” are the things we have evolved to deal with, and the way we deal with them are not always admirable. But in the adaptive landscape we all came from, they were proven to work. We still carry that baggage with us. But is it right to transfer that baggage to algorithms in order to make them more human? Or should we be aiming for a blank slate?

When Technology Makes Us Better…

I’m always quick to point out the darker sides of technology. So, to be fair, I should also give credit where credit is due. That’s what today’s column is about. Technology, we collectively owe you one. Why? Because without you, we wouldn’t be slowly chipping away at the massive issue of sexual predation. #Metoo couldn’t have happened without you.

I’ve talked before of Mark Granovetter’s threshold model of crowd behavior. In the past, I’ve used it to explain how it can tip collective behavior towards the negative, turning crowds into mobs. But it can also work the other way, turning crowds into movements. Either way, the threshold model depends on connection, and technology makes that connection possible. What’s more, it makes it possible in a very specific way that is important to understand.
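For anyone who hasn’t run into it, the threshold model itself is simple enough to sketch in a few lines of Python. The thresholds below are invented for illustration: each person acts only once the number of people already acting reaches their personal threshold, which is why one low-threshold individual can set off a cascade that an almost identical crowd never starts.

    def cascade_size(thresholds):
        # Granovetter-style threshold model: a person joins once the count
        # of people already participating reaches their personal threshold.
        acting = 0
        for t in sorted(thresholds):
            if t <= acting:
                acting += 1
            else:
                break
        return acting

    # Classic example: thresholds 0, 1, 2, ... 99 produce a full cascade,
    # but nudge one person's threshold from 1 up to 2 and it stalls at one.
    print(cascade_size(list(range(100))))                 # 100
    print(cascade_size([0, 2, 2] + list(range(3, 100))))  # 1

The count only propagates if people can see it, and see each other – which is where connection, and the technology that provides it, comes in.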

Technological connection is often ideological connection. We connect in ad hoc social networks that center around an idea. We find common ground that is not physical but conceptual. In the process, we forge new social connections that are freed from the typical constraints that introduce friction in the growth of social networks. We create links that are unrestricted by how people look, where they live, how much they earn or what church they worship at. All we need is to find resonance within ideas and we can quickly create a viral wave. The cost of connection is reduced.

This in no way diminishes the courage required to post the #metoo hashtag. I have been in the digital world for almost three decades now, and in that time I have met many, many remarkable women. I hope I have judged them as fellow human beings and have treated them as equals. It has profoundly saddened me to see most of them join the #metoo movement in the past few weeks. It has been painful to learn just how pervasive the problem is and to see this light creep into a behavioral basement of which we are becoming more aware. But it is oh-so-necessary. And I must believe that technology – and the comfort it affords by letting you know you’re not alone – has made it just a little bit easier to type those six characters.

As I have always said – technology erases friction. It breaks down those sticking points that used to allow powerful individuals to exert control. Control is needed to maintain the circles of complicity that allow the Harvey Weinsteins of the world to prey on others. But with technology, all we need is one little crack in that circle to set in motion a chain reaction that blasts it apart.

I believe that the Weinstein example will represent a sea-change moment in how our society views sexual predation. These behaviors are always part of a power game. For the game to continue, the perpetrator must believe in their own power and their ability to maintain it. Once the power goes, so does the predation. #Metoo has shown that your power can disappear immediately and permanently if you get publicly tagged. “If it happened to Harvey, it could happen to me” may become the new cautionary tale.

But I hope it’s not just the fear of being caught that pushes us to be better. I also hope that we have learned that it’s not okay to tolerate this. In the incredibly raw and honest post by screenwriter Scott Rosenberg, we had our worst fears confirmed: “Everybody f—ing knew!” And everybody who knew is being sucked into the whirlpool of Harvey’s quickly sinking bulk. I have to believe this is tipping the balance in the right direction. We good men (and women) might be less likely to do nothing next time.

Finally, technology has made us better, whether we believe it or not. In 1961, when I was born, Weinstein’s behavior would have been accepted as normal. It would have even been considered laudable in some circles (predominantly male circles, granted). As a father of two daughters, I am grateful that that’s not the world we live in today. The locker room mentality that allows the Harvey Weinsteins, Robert Scobles, and Donald Trumps of the world to flourish is being chipped away – #metoo post by #metoo post.

And we have technology to thank for that.