The Retrofitting of Broadcasting

I returned to my broadcast school for a visit last week. Yes, it was nostalgic, but it was also kind of weird.

Here’s why…

I went to broadcast school in the early ’80s. The program I attended, at the Northern Alberta Institute of Technology, had just built brand new studios, outfitted with the latest equipment. We were the first group of students to get our hands on the stuff. Some of the local TV stations even borrowed our studio to do their own productions. SCTV – with the great John Candy, Catherine O’Hara, Eugene Levy, Rick Moranis and Andrea Martin – was produced just down the road at ITV. It was a heady time to be in TV. I don’t want to brag, but yeah, we were kind of a big deal on campus.

That was then. This was now. I went back for my first visit in 35 years, and nothing had really changed physically. The studios, the radio production suites, the equipment racks, the master control switcher – it was all still there – in all its bulky, behemoth-like glory. They hadn’t even changed the lockers. My old one was still down from Equipment Stores and right across from one of the classrooms.

The disruption of the past four decades was instantly crystallized. None of the students today touched any of that ’80s-era technology – well, except for the locker. That was still functional. The rows and rows of switches, rotary pots, faders and other doodads hadn’t been used in years. The main switching board served as a makeshift desk for a few computer monitors and a keyboard. The radio production suites were used to store old office chairs. The main studio, where we once taped interviews, music videos, multi-camera dramas, sketch comedies and even a staged bar fight? Yep, more storage.

The campus news show was still shot in the corner, but the rest of that once state-of-the-art studio was now a very expensive warehouse. The average iPhone today has more production capability than the sum total of all that analog wizardry. Why use a studio when all you need is a green wall?

I took the tour with my old friend Daryl, who is still in broadcasting. He is the anchor of the local 6 o’clock news. Along the way we ran into a couple of other old schoolmates who were now instructors. And we did what middle-aged guys do. We reminisced about the glory days. We roamed our old domain like dinosaurs ambling towards our own twilight.

When we entered the program, it was the hottest ticket in town. They had 10 potential students vying for every program seat available. Today, on a good year, it’s down to 2 to 1. On a bad year, everyone who applies gets in. The program has struggled to remain relevant in an increasingly digital world and now focuses on those who actually want to work in television news. All the other production we used to do has been moved to a digital production program.

We couldn’t know it at the time, but we were entering broadcasting just as it reached the apex of its arc. You still needed bulk to be a broadcaster. An ENG (Electronic News Gathering) camera weighed in at a hefty 60-plus pounds, not including the extra battery belt. Now, all you need is a smartphone and a YouTube account. The only thing produced at most local stations is the news. And the days are numbered for even that.

If you are middle-aged like I am, your parents depend on TV for their news. For you, it’s an option – one of many places you can get it. You probably watch the 6 o’clock news more out of habit than anything. And your kids never watch it. I know mine don’t. According to the Pew Research Center, only 27% of those aged 18 to 29 turn to TV for their news; half of them get their news online. In my age group, 72% of us still get our news from TV, with 29% turning online. The TV news audience is literally aging to death.

My friend Daryl sees the writing on the wall. Everybody in the business does. When I met his co-anchor and told her that I had taken the digital path, she said, “Ah, an industry with a future.”

Perhaps, but then again, I never got my picture on the side of a bus.

Attention: Divided

I’d like you to give me your undivided attention. I’d like you to – but you can’t. First, I’m probably not interesting enough. Second, you no longer live in a world where that’s possible. And third, even if you could, I’m not sure I could handle it. I’m out of practice.

The fact is, our attention is almost never undivided anymore. Let’s take talking, for example. You know: old-fashioned, face-to-face, sharing-the-same-physical-space communication. It’s the one channel that most demands undivided attention. But when is the last time you had a conversation where you gave it 100 percent of your attention? I actually had one this past week, and I have to tell you, it unnerved me. I was meeting with a museum curator and she immediately locked eyes with me and gave me the full breadth of her attention. I faltered. I couldn’t hold her gaze. As I talked, I scanned the room we were in. It’s probably been years since someone did that to me. And nary a smartphone was in sight.

If this is true when we’re physically present, imagine the challenge in other channels. Take television, for instance. We don’t watch TV like we used to. When I was growing up, I would be verging on catatonia as I watched the sparks fly between Batman and Catwoman (the Julie Newmar version, with all due respect to Eartha Kitt and Lee Meriwether). My dad used to call it the “idiot box.” At the time, I thought it was a comment on the quality of programming, but I now realize he was referring to my mental state. You could have dropped a live badger in my lap and not an eye would have been batted.

But that’s definitely not how we watch TV now. A recent study indicates that 177 million Americans have at least one other screen going – usually a smartphone – while they watch TV. According to Nielsen, there are only 120 million TV households. That works out to an average of 1.48 adults per household dividing their attention between at least two devices while watching Game of Thrones. My daughters and wife are squarely in that camp. Ironically, I now get frustrated because they don’t watch TV the same way I do – catatonically.

Now, I’m sure watching TV does not represent the pinnacle of focused mindfulness. But this could be a canary in a coal mine. We simply don’t allocate undivided attention to anything anymore. We think we’re multi-tasking, but that’s a myth. We don’t multi-task – we mentally fidget. On average, we have the attention span of a gnat.

So, what is the price we’re paying for living in this attention-deficit world? Well, first, there’s a price to be paid when we do decide to communicate. I’ve already described how unnerving it was for me when I did have someone’s laser-focused attention. But the opposite is also true. It’s tough to communicate with someone who is obviously paying little attention to you. Try presenting to a group that is more interested in chatting with each other. Research shows that our ability to communicate effectively erodes quickly when we’re not getting feedback that the person or people we’re talking to are actually paying attention to us. Effective communication requires an adequate allocation of attention on both ends; otherwise it spirals downward.

But it’s not just communication that suffers. It’s our ability to focus on anything. It’s just too damned tempting to pick up our smartphone and check it. We’re paying a price for our mythical multitasking. Boise State professor Nancy Napier suggests a simple test to prove this. Draw two lines on a piece of paper. While having someone time you, write “I am a great multi-tasker” on one line, then write the numbers from 1 to 20 on the other. Next, repeat the exercise, but this time alternate between the two: write “I” on the first line, then “1” on the second, then go back and write “a” on the first, “2” on the second, and so on. What’s your time? It will probably be double what it was the first time.
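If you’d rather not dig out pen and paper, below is a rough keyboard version of the same exercise – a minimal Python sketch (the prompts, the timing harness and the retry-on-typo rule are my own adaptation, not Napier’s protocol) that times a sequential round against an interleaved one:

```python
import time

SENTENCE = "I am a great multi-tasker"
LETTERS = [c for c in SENTENCE if c != " "]
NUMBERS = [str(n) for n in range(1, 21)]

def timed_round(items):
    """Time how long it takes to type each item, one prompt at a time."""
    start = time.time()
    for item in items:
        while input(f"Type {item!r}: ").strip() != item:
            pass  # re-prompt on a typo, roughly like rewriting on paper
    return time.time() - start

# Round 1: finish the sentence, then write out the numbers.
sequential = timed_round(LETTERS) + timed_round(NUMBERS)

# Round 2: alternate letter, number, letter, number...
# (zip stops at the shorter list, which is close enough for a sketch)
interleaved = timed_round([x for pair in zip(LETTERS, NUMBERS) for x in pair])

print(f"sequential: {sequential:.0f}s   interleaved: {interleaved:.0f}s")
```

The interleaved round forces the same constant task-switching the paper version does, and the gap between the two times is the cost of that switching.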

Every time we try to mentally juggle, we’re more likely to drop a ball. Attention is important. But we keep allocating thinner and thinner slices of it. And a big part of the reason is the smartphone that is probably within arm’s reach of you right now. Why? Because of something called intermittent variable rewards. Slot machines use it. And that’s probably why slot machines make more money in the US than baseball, movies and theme parks combined. Tristan Harris, who is taking technology to task for hijacking our brains, explains the concept: “If you want to maximize addictiveness, all tech designers need to do is link a user’s action (like pulling a lever) with a variable reward. You pull a lever and immediately receive either an enticing reward (a match, a prize!) or nothing. Addictiveness is maximized when the rate of reward is most variable.”
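The mechanics Harris describes are simple enough to mock up. Here’s a minimal sketch of a variable-reward schedule – the 30% payoff probability and the 20 “pulls” are arbitrary, illustrative numbers, not anything measured from a real platform:

```python
import random

def pull(reward_probability=0.3):
    """One 'lever pull' or phone check: a reward arrives, or nothing does."""
    return random.random() < reward_probability

# Twenty checks of the slot machine / inbox / feed. The payoff pattern is
# deliberately unpredictable -- and that irregularity is what keeps us pulling.
history = ["reward" if pull() else "nothing" for _ in range(20)]
print(history)
```

Run it a few times: the rewards never land in a predictable rhythm, and that uncertainty – not the size of any single payoff – is what makes the loop so hard to put down.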

Your smartphone is no different. In this case, the reward is a new email, Facebook post, Instagram photo or Tinder match. Intermittent variable rewards – together with the fear of missing out – make your smartphone as addictive as a slot machine.

I’m sorry, but I’m no match for all of that.

Will We Ever Let Robots Shop for Us?

Several years ago, my family and I visited Astoria, Oregon. You’ll find it at the mouth of the Columbia River, where it empties into the Pacific. We happened to take a tour of Astoria and our guide pointed out a warehouse. He told us it was filled with canned salmon, waiting to be labeled and shipped. I asked what brand they were. His answer was “All of them. They all come from the same warehouse. The only thing different is the label.”

Ahh… the power of branding…

Labels can make a huge difference. If you need proof, look no further than the experimental introduction of generic brands in grocery stores. Well, they were generic to begin with, anyway. But over time, the generic “yellow label” was replaced with a plethora of store brands. The quality of what’s inside the box hasn’t changed much, but the packaging has. We do love our brands.

But there’s often no rational reason to do so. Take the aforementioned canned salmon, for example. Same fish, no matter what label you stick on it. Brands are a trick our brain plays on us. We may swear our favorite brand tastes better than its competitors, but it’s usually just our brain short-circuiting our senses and our sensibility. Neuroscientist Read Montague found this out when he redid the classic Pepsi taste test using an fMRI scanner. The result? When Coke drinkers didn’t know what they were drinking, the majority preferred Pepsi. But the minute the brand was revealed, they swore allegiance to Coke once again. The taste hadn’t changed, but their brains had. As soon as the brain was aware of the brand, parts of it suddenly started lighting up like a pinball machine.

In previous research we did, we found that the brain instantly responded to favored brands the same way it did to a picture of a friend or a smiling face. Our brains have an instantaneous and subconscious response to brands. And because of that, our brains shouldn’t be trusted with buying decisions. We’d be better off letting a robot do it for us.

And I’m not saying that facetiously.

A recent post on Bloomberg.com looked forward 20 years and predicted how automation would gradually take over every step of the consumer product supply chain, from manufacturing to shipping to delivery to our door. The post predicts that the factory floor, the warehouse, ocean liners, trucks and delivery drones will all be powered by artificial intelligence and robotic labor. The first set of human hands to touch a product might be those of the buyer. But maybe we’re automating the wrong side of the consumer transaction. The thing human hands shouldn’t be touching is the buy button. We suck at it.

We have taken some steps in the right direction. Itamar Simonson and Emanuel Rosen predicted the death of branding in their book Absolute Value:

“In the past the marketing function “protected” the organization in some cases. When things like positioning, branding, or persuasion worked effectively, a mediocre company with a good marketing arm (and deep pockets for advertising) could get by. Now, as consumers are becoming less influenced by quality proxies, and as more consumers base their decisions on their likely experience with a product, this is changing.”

But our brand love dies hard. If our brain can literally rewire the evidence from our own senses – how can we possibly make rational buying decisions? True, as Simonson and Rosen point out, we do tend to favor objective information when it’s available, but at the end of the day, our buying decisions still rely on an instrument that has proven itself unreliable in making optimal decisions under the influence of brand messaging.

If we’re prepared to let robots steer ships, drive trucks and run factories, why won’t we let them shop for us? Existing shopping bots stop well short of actually making the purchase. We’ll put our lives in the hands of A.I. in a myriad of ways, but we won’t hand our credit card over. Why is that?

It seems ironic to me. If there’s any area where machines can beat humans, it’s making purchases. They’re much better at filtering on objective criteria, they can stay on top of prices everywhere, and they can instantly aggregate data from all similar types of purchases. Most importantly, machines can’t be tricked by branding or marketing. They can complete the Absolute Value loop Simonson and Rosen talk about in their book.
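To make that concrete, here’s a minimal, purely hypothetical sketch of a brand-blind buying agent. The product names, weights and scoring formula are all invented for illustration; the only point is that the brand field never enters the math:

```python
from dataclasses import dataclass

@dataclass
class Offer:
    brand: str          # carried along, but deliberately ignored by the scorer
    price: float        # dollars
    rating: float       # aggregate review score, 0-5
    delivery_days: int

def score(offer: Offer) -> float:
    """Rank purely on objective criteria; branding can't tip the scale."""
    return offer.rating * 2 - offer.price * 0.1 - offer.delivery_days * 0.5

offers = [
    Offer("BigName",    24.99, 4.1, 2),
    Offer("HouseBrand", 18.49, 4.3, 4),
    Offer("NoName",     16.99, 3.9, 6),
]

best = max(offers, key=score)
print(f"Buy: {best.brand} at ${best.price}")
```

A real agent would fold in price history, review authenticity and the rest, but the discipline is the same: to the algorithm, the label is just a string, not a signal.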

Of course, there’s just one little problem with all that. It essentially ends the entire marketing and advertising industry.

Ooops.

Bias, Bug or Feature?

When we talk about artificial intelligence, I think of a real-time Venn diagram in motion. One side is the sphere of all human activity. This circle is huge. The other side is the sphere of artificially intelligent activity. It’s growing exponentially. And the overlap between the two is expanding at the same rate. It’s this intersection between the two spheres that fascinates me. What are the rules that govern the interplay between humans and machines?

Those rules necessarily depend on the nature of the interplay. For the sake of this column, let’s focus on the researchers and developers who are trying to make machines act more like humans. Take Jibo, for example. Jibo is “the first social robot for the home.” Jibo tells jokes, answers questions, understands nuanced language and recognizes your face. It’s just one more example of artificial intelligence that’s intended to be a human companion. And as we build machines that are more human, we’re finding that many of the things we thought were human foibles are actually features that developed for reasons that were, at one time, perfectly valid.

Trevor Paglen is a winner of a MacArthur “genius grant.” His latest project is to see what AI sees when it’s looking at us: “What are artificial intelligence systems actually seeing when they see the world?” What’s interesting is that when machines see the world, they use machine-like reasoning to make sense of it. For example, Paglen fed hundreds of images of fellow artist Hito Steyerl into a face-analyzing algorithm. In one instance, she was evaluated as “74% female.”

This highlights a fundamental difference in how machines and humans see the world. Machines calculate probabilities. So do we, but in our case that happens behind the scenes, and it’s only part of how we understand the world. Operating a level above that, we use meta-signatures – categorization, for example – to quickly compartmentalize and understand the world. We would know immediately that Hito was a woman. We wouldn’t have to crunch the probabilities. By the way, we do the same thing with race.
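The difference fits in a few lines of code. This is a toy sketch – the numbers and the 0.5 cut-off are invented for illustration – of what separates a classifier’s actual output, a probability, from the instant, hard category a human jumps to:

```python
def snap_to_category(p_female: float, threshold: float = 0.5) -> str:
    """Collapse a graded probability into the all-or-nothing label we jump to."""
    return "female" if p_female >= threshold else "male"

machine_estimate = 0.74   # the algorithm's graded answer: "74% female"
print(f"machine: {machine_estimate:.0%} female -> "
      f"human-style label: {snap_to_category(machine_estimate)}")
```

The machine keeps the uncertainty; we throw it away the moment we categorize – which is exactly what makes our snap judgments fast, and exactly what makes them biased.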

But is this a feature or a bug? Paglen has his opinion: “I would argue that racism, for example, is a feature of machine learning—it’s not a bug,” he says. “That’s what you’re trying to do: you’re trying to differentiate between people based on metadata signatures, and race is like the biggest metadata signature around. You’re not going to get that out of the system.”

Whether we like it or not, our inherent racism was a useful feature many thousands of years ago. It made us naturally wary of other tribes competing for the same natural resources. As much as it’s abhorrent to most of us now, it’s still a feature that we can’t “get out of the system.”

This highlights a danger in the overlap area between humans and machines. If we want machines to think as we do, we’re going to have to equip them with some of our biases. As I’ve mentioned before, there are some things that humans do well, or at least do better than machines. And there are things machines do infinitely better than we do. Perhaps we shouldn’t try to merge the two. If we’re trying to get machines to do what humans do, are we prepared to program racism, misogyny, intolerance, bias and greed into the operating system? All these things are part of being human, whether we like to admit it or not.

But there are other areas that are rapidly falling into the overlap zone of my imaginary Venn diagram. Take business strategy, for example. A recent study from Capgemini found that 79% of organizations implementing AI feel it brings new insights and better data analysis, 74% feel it makes their organizations more creative, and 71% feel it helps them make better management decisions. A friend of mine recently brought this to my attention, along with what was, for him, an uncharacteristic rant: “I really would’ve hoped senior executives might’ve thought creativity and better management decisions were THEIR GODDAMN JOB and not be so excited about being able to offload those dreary functions to AI’s which are guaranteed to be imbued with the biases of their creators or, even worse, unintended biases resulting from bad data or any of the untold messy parts of life that can’t be cleanly digitized.”

My friend hit the proverbial nail on the proverbial head. Those “untold messy parts of life” are the things we have evolved to deal with, and the ways we deal with them are not always admirable. But in the adaptive landscape we all came from, they were proven to work. We still carry that baggage with us. But is it right to transfer that baggage to algorithms in order to make them more human? Or should we be aiming for a blank slate?

We Don’t Need More Athletes and Models – We Do Need More People Who Understand Complexity

Have you seen the Verizon ad?

 

The one that starts with LeBron James walking towards the camera. He tells us, “We don’t need more LeBrons.” He’s followed in quick succession by other celebrities, including model Adriana Lima, quarterback Drew Brees and soccer star David Villa, all saying we don’t need more of their kind. The ad wraps up by saying what we do need is more people in science and technology to fill the 4 million jobs available. Verizon is pitching in by supporting education in STEM subjects (science, technology, engineering and math). The world, apparently, needs a lot more engineers.

Fair enough. The world runs on science and technology. But there’s an unintended consequence that comes with that: technology is making the world a more complex place. And what we really need is more people who understand what complexity means.

By complexity, I don’t mean complicated. Those are two different things. I mean complexity in its classic sense – coming from the Latin “com” – meaning “together” – and “plex” – meaning “woven”. “Woven together” is a pretty good starting point for understanding complexity. It’s a concept that depends on connection, and we are more connected than ever before. Whether we like it or not, with connection comes complexity. And when we’re talking about complexity, we’re talking about a whole new ball game where all traditional bets are off.

There’s another funny thing about complexity: it’s nothing new. The world has always been complex. Biology has long been the domain of complex adaptive systems, and the same is true of all the physical sciences. Benoit Mandelbrot found fractal complexity in leaves and in the coastline of Britain. Quantum physics has always been around; it wasn’t invented at the beginning of the last century by Max Planck, Albert Einstein and Niels Bohr. It just took us most of our history as a species to discover it, hiding there beneath the deceptively simple rules of Isaac Newton. Complexity has always been part of nature. We’ve just been ignoring it for a long, long time, believing with all our hearts in a simpler, more comprehensible world.

Humans hate complexity, because complexity brings with it unpredictability and an inherent lack of control. It leads naturally into chaos. We much prefer models with foreseeable outcomes. We have been trying for many years to predict the weather, with very limited success. Why? Because weather is complex and often chaotic. And it’s getting more so, not less.

But the extreme weather we’re seeing more and more of is analogous to many other parts of our world. Complexity is rearing its head in more and more places. It lies beneath everything. In the words of the Santa Fe Institute, the self-proclaimed world headquarters for complexity science, its researchers “endeavor to understand and unify the underlying, shared patterns in complex physical, biological, social, cultural, technological, and even possible astrobiological worlds.”

Which means complexity is everywhere. It impacts everything. And almost none of us understand it. But we’ve got to figure this stuff out, because the stakes are huge.

Let’s take something as important to us as democracy, for instance.

There is nothing especially complex about the idea of democracy. But the model of democracy is a different beast, because it relies on the foundation of our society, which is incredibly complex. Democracy is dependent on unwritten rules, which are in turn dependent on conventions and controls that have been inherent in our society. These are what have been called the “soft guardrails of democracy”. And they are being eroded by our newly connected complexity. A few weeks ago, some of America’s top political scientists got together at Yale University to talk about democracy and almost all of them agreed – democracy is in deep trouble. Yascha Mounk, from Harvard, summed up their collective thoughts succinctly: “If current trends continue for another 20 or 30 years, democracy will be toast.”

So complexity is something we should be learning about. But where to start? And when? Currently, if people study complexity science at all, it’s generally at the post-grad level. And that’s just a handful of people, at a few universities. We need to start understanding complexity and its implications much sooner. It should be covered in grade school. But there’s no one to teach it, because the majority of teachers have no idea what I’m talking about. In a recent dissertation, a researcher from the University of Pennsylvania asked science teachers in a number of schools in Singapore if they were familiar with complexity. The findings were disheartening: “a large sample of ninety Grades 11 and 12 science teachers in six randomly-selected schools across Singapore revealed as many as 80% of the teachers reported that they did not have prior knowledge or heard of complex systems.” By the way, Singapore is consistently rated best in the world for science education. Here in North America, we trail by a significant margin. If this is a problem there, it’s a bigger problem here.

If you’re old enough to remember the movie The Graduate, you’ll recall the scene where the graduate – played by Dustin Hoffman – wanders around his parents’ cocktail party until he’s cornered by a family friend, Mr. McGuire, who offers a word of career advice. Literally, one word:

“I just want to say one word to you – just one word. Are you listening? Plastics.”

That was 50 years ago. Today, my word is “complexity.”

Are you listening?

157 Shades of Grey…

Design is important. Thinking through how people will respond to the aesthetics of your product is an admirable thing. I remember once having the pleasure of sharing a stage with JetBlue’s VP of Marketing – Amy Curtis-McIntyre. She was explaining how important good design was to the airline’s overall marketing strategy. A tremendous amount of thought went into the aesthetics of all their printed materials – even those cards explaining the safety features of the airplane that none of us ever read. But on JetBlue, not only did passengers read them – they stole them because they were so cleverly designed. Was this a problem for management? Not according to Amy:

“You know you’re doing something right when people steal your marketing shit”

So, I’m a fan of good design. But according to a recent story on Fastcodesign.com, Google is going at least 156 shades too far. They seem obsessed with color – or, at least, with testing for colors. The design team for Google’s new home assistant, the Mini, had to pick three different colors for the home appliance. They wanted one to make a personal statement, and apparently that statement is best made by the color “Coral.” Then they needed a color that would sit unobtrusively next to your TV set, and that turned out to be “Charcoal.” Finally, they needed a “floater” color that could go anywhere in the house, including the kitchen. And that’s where the design team at Google may have gone off the tracks. They tested 157 shades of grey – yes, 157 – before they settled on “Chalk,” said to be the most inoffensive shade imaginable. They even worked with a textile firm to create their own custom cloth for the grille on top.

That beats Google’s previous obsessive-compulsive testing disorder record, set by then-VP of Search Marissa Mayer when she ordered the design team to test 41 different shades of blue for search links to see which got the most clicks. At Google, good design seems to equal endless testing. But is there anything wrong with that?

Well, for one thing, you can test yourself into a rabbit hole, running endless tests and drowning in reams of data in search of the optimal solution – completely missing the global maximum while myopically focused on the local one. Google tests everything – and I mean everything – truly, madly and deeply. Even Google insiders admit this penchant for testing often gets them focused on the trees rather than the forest. This is particularly true for design. Google has a long history of obsessively turning out ho-hum designs.
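The local-versus-global trap is easy to illustrate with a toy example. In the sketch below, the “appeal” curve and the step size are invented for the purpose: greedy, one-tweak-at-a-time testing climbs the nearest bump, while a deliberately wider scan finds the better peak.

```python
import math

def appeal(x: float) -> float:
    """A toy 'user response' curve with several bumps (local maxima)."""
    return math.sin(3 * x) + 0.5 * math.sin(7 * x) + 0.1 * x

def hill_climb(x: float, step: float = 0.01, iters: int = 1_000) -> float:
    """Greedy A/B-style tweaking: accept a tiny change only if it tests better."""
    for _ in range(iters):
        for candidate in (x + step, x - step):
            if appeal(candidate) > appeal(x):
                x = candidate
    return x

local = hill_climb(0.2)                                  # endless small tests
wide = max((x / 100 for x in range(300)), key=appeal)    # step back and scan

print(f"incremental testing settles at x={local:.2f}, appeal={appeal(local):.2f}")
print(f"a wider scan finds x={wide:.2f}, appeal={appeal(wide):.2f}")
```

Both approaches are “data-driven.” Only one of them ever sees the forest.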

Personally, when it comes to pure design magic, I much prefer the Apple approach. Led by Steve Jobs’ and Jony Ive’s unerring sense for the aesthetic, it’s hard to think of a longer run of spectacular product designs. Yes, they too sweated the small stuff. But those details were always in service of a higher vision – an empathetic, elegantly simple, friendly approach to product design that somehow magically connected with the user, leaving that user somewhat awed and consistently impressed. One might quibble with the technology that lies inside the package, but no one has put together a more beautiful package than the Apple design team at the height of their powers.

When you look at a Google product, you have the result of endless testing and data crunching. When you look at a classic Apple design, you sense that this came from more than simple testing. This came from intuition and creativity.

 

Trust No One, Trust Nothing

In just one day of headlines on MediaPost last week, two different articles mentioned a lack of trust – a lack of trust in contextual ad placement, and a lack of trust in audience measurement data. But our industry’s trust issues go far deeper than those two instances. Article after article cites an erosion of trust and the spreading of relational fault lines in every aspect of the business.

The question of the day is, “Where did the trust go?” The follow-up question then becomes, “What do we mean by trust?”

That is a difficult question. Trust is a word with many, many meanings. Over 20 years ago, University of Minnesota business professors D. Harrison McKnight and Norman L. Chervany wrote an extensive review to answer just that question. In it, across the many constructs of trust, they identified four dimensions: benevolence, integrity, competence and predictability. But not all these dimensions are required in all applications of trust.

First of all, there are two broad categories of trust: structural trust – trust in a system – and interpersonal trust – trust in a person. In their analysis, McKnight and Chervany looked at six constructs of trust that can apply in different situations. For the sake of this discussion, let’s focus on two of these:

“System trust: the extent to which one believes that proper impersonal structures are in place to enable one to anticipate a successful future endeavor.”

And…

“Situational Trust – meaning that one has decided to trust without regard to the specific persons involved, because the benefits of trusting in this situation outweigh the possible negative outcomes of trusting.”

What trust did exist in marketing was an outcome of these two constructs. Both tend to apply to the structure of marketing, not the people in marketing. The headlines I cited earlier both pointed to a breakdown of trust at the system level, not the personal level. Now, let’s look at those four dimensions as they apply to structural trust in marketing. No one has ever accused marketers of being overly benevolent, so let’s set that one aside. Also, I would argue – strenuously – that marketers today, including those at agencies, are more competent than ever before. They have been largely successful at turning marketing from an arcane guessing game that paraded as art into an empirically backed science. So a lack of competence can’t be blamed for this breakdown of trust. That leaves integrity and predictability. I suspect there’s a compound relationship between the two.

The reason we’re losing structural trust is that marketing is no longer predictable. And this lack of predictability is triggering a suspicion that there has been a corresponding lack of integrity. But the unpredictability of marketing is no one’s fault.

Marketing today is analogous to physics at the turn of the last century. For 200 years, the universe had been neatly ruled by Newton’s laws. Then physicists started discovering things that couldn’t be so neatly explained, and the universe became a place of uncertainty principles, Schrödinger’s cat and strange attractors. Everything we thought was predictable in all situations suddenly became part of a much bigger – and more complex – mystery.

Similarly, mass marketing could run by Newton-like laws because we were dealing with mass and weren’t looking too closely. Apply enough force to enough people with enough frequency and you could move the needle in what seemed like a predictable fashion. But marketing today is a vastly different beast. We market one-to-one, and those “ones” are all interconnected, which creates all kinds of feedback loops and network effects. That creates complexity – so predictability is as dead as the aforementioned Schrödinger’s cat (or is it?).

I don’t think this comes as news to anyone reading this column. We all know we’re being disrupted. I think we’re all beginning to understand the challenges of complexity. So why don’t we just accept it as the new normal and continue to work together? Why are clients feeling personally betrayed by their agencies, market research firms and ad delivery platforms? It’s because our brains aren’t very nuanced when it comes to identifying trust and betrayal. Brains operate by the “when you’re a hammer – everything looks like a nail” principle.

Rationally, we understand the difference between interpersonal trust and situational trust, but we have to remember that our rationality is reinforced by emotional rewards and cautions. When we’re in a trusting relationship – or system – our ventral striatum, medial prefrontal cortex and caudate nucleus all perk happily along, priming our brains with oxytocin and pushing all the right reward buttons. But whether it’s a person or a situation that betrays our trust, the same neural mechanisms fire – the insula and amygdala – creating feelings of frustration, fear, anger and resentment.

Now, none of this is the fault of anyone in marketing. But humans work on cause and effect. If our marketing is not working, it’s easier to assign a human cause. And it’s much easier to feel betrayed by a human than by a system.

 

Together We Lie

Humans are social animals. We’ve become this way because – evolutionarily speaking – we do better as a group than individually. But there’s a caveat here. If you get a group of usually honest people together, they’re more likely to lie. Why is this?

Martin Kocher and his colleagues at LMU Munich set up a study in which participants watched a video of a single roll of a die and then reported the number that came up. Depending on what they reported, there was a payoff. The researchers tested both individuals and small groups, the latter given the opportunity to chat anonymously with one another before reporting. The result:

“Our findings are unequivocal: People are less likely to lie if they decide on their own.”

Even individuals who had answered honestly on their own started lying when they were part of a group.

The researchers called this a “dishonesty shift.” They blame it on a shifting weight placed on the norm of honesty. Norms are those patterns we have that guide us in our behaviors and beliefs. But those norms may be different individually than they are when we’re part of a group.

“Feedback is the decisive factor. Group-based decision-making involves an exchange of views that may alter the relative weight assigned to the relevant norm.”

Let’s look at how this may play out. Individually, we may default to honesty. We do so because we’re unsure of the consequences of not being honest. But when we get in a group, we start talking to others and it’s easier to rationalize not being honest – “Well, if everyone’s going to lie, I might as well too.”

Why is this important? Because marketing is done in groups, by groups, to groups. The dynamics of group-based ethics are important for us to understand. They could help explain the most egregious breaches of ethics we see becoming more and more commonplace, whether in corporations or in governments.

Four of the seminal studies in psychology and sociology shed further light on why groups tend to shift towards dishonesty. Let’s look at them individually.

In 1955, Solomon Asch showed that even if we individually believe something to be incorrect, if enough people around us voice a different opinion, we’ll go with the group consensus rather than risk being the odd person out. In his famous study, he surrounded a subject with “plants” who, when shown cards with three black lines of obviously differing lengths, would insist that the lines were equal. The subjects were then asked their opinion. Three-quarters of them went along with the group at least once rather than risk disagreement. As Asch said in his paper, quoting sociologist Gabriel Tarde: “Social man is a somnambulist.” We have about as much independent will as your average sleepwalker.

Now, let’s continue with Stanley Milgram’s obedience-to-authority study, perhaps the most controversial and frightening of the group. When confronted with an authoritative demeanor, a white coat and a clipboard, 65% of subjects meekly followed directions and delivered what they believed could be lethal levels of electric shock to a hapless individual. The results were so disheartening that we’ve been trying to debunk them ever since. But a follow-up study by Stanford psychology professor Philip Zimbardo – in which subjects were arbitrarily assigned roles as guards and inmates in a mock prison – was even more shocking. We’re more likely to become monsters and abandon our personal ethics when we’re in a group than when we act alone. Whether it’s obedience to authority, as Milgram was trying to prove, or social conformity taken to the extreme, we tend to do very bad things when we’re in bad company.

But how do we slip so far, so quickly, from our own personal ethical baseline? Here’s where the last study I’ll cite can shed a little light. Sociologist Mark Granovetter – famous for his “Strength of Weak Ties” paper – also looked at the viral spreading of behaviors in groups. I’ve talked about this in a previous column, but here’s the short version: if we have a choice between two options with accompanying social consequences, which one we pick may be driven by social conformity. If we see enough other people around us choosing the more disruptive option (starting a riot, say), we may follow suit. Even though we all have different thresholds – and we do – the nature of a crowd is such that those with the lowest thresholds choose the disruptive option first, setting off a bandwagon effect that eventually tips the entire group over the threshold.
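Granovetter’s threshold model is simple enough to sketch in a few lines of code. Below is a minimal version of his own illustration (the function is my paraphrase of the model, not his notation): each person joins once the number already participating reaches their personal threshold, and a one-person change in thresholds can be the difference between a full riot and a lone instigator.

```python
def cascade_size(thresholds):
    """Granovetter-style cascade: a person joins once the number of people
    already participating meets or exceeds their personal threshold."""
    joined = 0
    while True:
        now = sum(1 for t in thresholds if t <= joined)
        if now == joined:        # no one new crossed their threshold
            return joined
        joined = now

# Granovetter's classic example: thresholds 0, 1, 2, ..., 99 produce a full
# cascade, but nudging the single '1' up to '2' strands the instigator alone.
uniform = list(range(100))
tweaked = [0, 2] + list(range(2, 100))
print(cascade_size(uniform), cascade_size(tweaked))   # -> 100 1
```

Two crowds with nearly identical distributions of individual morality end up in wildly different places – which is exactly the kind of group-level unpredictability this column is describing.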

These dynamics were all studied in isolation, because that’s how science works: we study variables in isolation. But it’s when the factors combine that we get the complexity that typifies the real world – and the real marketplace. And that’s where predictability goes out the window. The group dynamics in play can create behavioral patterns that make no sense to the average person with the average degree of morality. But it’s happened before, it’s happening now, and it’s sure to happen again.

 

 

Addicted to Tech

A few columns ago, I mentioned one of the aspects of technology that troubles me – the shallowness of social media. I mentioned at the time that there were other aspects that were equally troubling. Here’s one:

Technology is addictive – and it’s addictive by design.

Let’s begin by looking at the definition of addiction:

Persistent compulsive use of a substance known by the user to be harmful

So, let’s break it down. I don’t think you can quibble with the persistent, compulsive use part. When’s the last time you had your iPhone in your hand? We can simply swap out “substance” for “device” or “technology.” So that leaves us with the last qualifier, “known by the user to be harmful” – and there are two parts to this: is it harmful, and does the user know it’s harmful?

First, let’s look at the neurobiology of addiction. What causes us to use something persistently and compulsively? Here, dopamine is the culprit. Our reward center uses dopamine and the pleasurable sensation it produces as a positive reinforcement to cause us to pursue activities which over many hundreds of generations have proven to be evolutionarily advantageous. But Dr. Gary Small, from the UCLA Brain Research Institute, warns us that this time could be different:

“The same neural pathways in the brain that reinforce dependence on substances can reinforce compulsive technology behaviors that are just as addictive and potentially destructive.”

We like to think of Big Tobacco as the most evil of all evil empires – guilty of promoting addiction to a harmful substance – but is there a lot separating it from the purveyors of tech – Facebook or Google, for instance? According to Tristan Harris, there may be a very slippery slope between the two. I’ve written about Tristan before. He’s the former Google product manager who launched the Time Well Spent non-profit, devoted to stopping “tech companies from hijacking our minds.” Harris points the finger squarely at the big Internet platforms for intentionally designing products to soak up as much of our time as possible. There’s empirical evidence to back up Harris’s accusations. Researchers at Michigan State University and two universities in the Netherlands found that merely seeing the Facebook logo can trigger a conditioned response in a social media user that starts the dopamine cycle spinning. We start jonesing for a social media fix.

So what if our smartphones and social media platforms seduce us into using them compulsively? Where’s the harm? That’s the second part of the addiction equation – is whatever we’re using actually harmful? After all, it’s not like tobacco, which was proven to cause lung cancer.

Ah, but that’s the thing, isn’t it? We smoked cigarettes for almost a hundred years before we finally found out they were bad for us. Sometimes it takes a while for the harmful effects of an addiction to appear. The same could be true for our tech habit.

Tech addiction plays out at many different levels of cognition, and it could potentially be much more sinister than the simple waste of time Tristan Harris is worried about. There’s mounting evidence that overuse of tech could dramatically alter our ability to socialize effectively with other humans. The debate, which I’ve talked about before, comes when we substitute screen-to-screen interaction for face-to-face. The supporters say this is simply another type of social bonding, one that comes with additional benefits. The naysayers worry that we’re just not built to communicate through screens, and that sooner or later there will be a price to be paid for our obsessive use of digital platforms.

Dr. Jean Twenge, professor of psychology at San Diego State University, researches generational differences in behavior, and it’s here that the full impact of a disruptive environmental factor can be seen. She found a seismic shift in behaviors between Millennials and the generation that followed them: a profound difference in how these generations view the world and where they spend their time. And it started in 2012, the year the proportion of Americans who owned a smartphone surpassed 50 percent. She sums up her concern in unequivocal terms:

“The twin rise of the smartphone and social media has caused an earthquake of a magnitude we’ve not seen in a very long time, if ever. There is compelling evidence that the devices we’ve placed in young people’s hands are having profound effects on their lives—and making them seriously unhappy.”

Not only are we less happy, we may also be getting less smart. As we become more reliant on technology, we do something called cognitive off-loading: we rely on Google rather than our memories to retrieve facts, and we trust our GPS more than our own wayfinding strategies to get us home. Cognitive off-loading is a way to move beyond the limits of our own minds, but there may be an unacceptable trade-off here. Brains are like muscles – if we stop using them, they begin to atrophy.

Let’s go back to that original definition and the three qualifying criteria:

  • Persistent, compulsive use
  • Harmful
  • We know it’s harmful

In the case of tech, let’s not wait a hundred years to put check marks after all of these.

 

 

To Buy or Not to Buy: The Touchy Subject of Mobile E-Commerce

A recent report from Akamai indicates that users have little patience when it comes to making purchases on a mobile device. Here are just a few of the stats:

  • While almost half of all consumers browse via their phones, only 1 in 5 complete transactions on mobile
  • Optimal load times for peak conversions ranged from 1.8 to 2.7 seconds across device types
  • Just a 100-millisecond delay in load time hurt conversion rates by up to 7% (see the back-of-the-envelope sketch after this list)
  • Bounce rates were highest among mobile shoppers and lowest among those using tablets
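To make that third stat concrete, here’s a quick back-of-the-envelope sketch. The traffic and baseline conversion numbers are entirely hypothetical; only the “up to 7% per 100 milliseconds” figure comes from the report.

```python
# Hypothetical traffic run through the report's worst-case figure:
# a 100 ms delay costing up to 7% of conversions (a relative, not absolute, drop).
sessions = 10_000        # hypothetical monthly mobile sessions
baseline_rate = 0.03     # hypothetical 3% conversion rate
relative_hit = 0.07      # "up to 7%" per 100 ms of added delay (from the report)

orders_before = sessions * baseline_rate
orders_after = orders_before * (1 - relative_hit)
print(f"{orders_before:.0f} orders -> {orders_after:.0f} orders: "
      f"{orders_before - orders_after:.0f} sales lost to a tenth of a second")
```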

But there may be more behind this than just slow load times. We also have to consider what modes we’re in when we’re interacting with our mobile device.

In 2010, Microsoft did a fascinating research project that looked at how user behaviors varied from desktop to tablet to smartphone. The research was headed by Jacquelyn Krones, a search product manager at the time. Search was the primary activity examined, but a larger behavioral context was also explored. While the study is seven years old, I think the core findings are still relevant. The researchers found that our behaviors tend to fall into three large buckets: missions, explorations and excavations. Missions are focused tasks, usually a hunt for a specific piece of information – an address or phone number, for example. Explorations are more open-ended and less focused on a given destination – seeing if there’s anything you want to do this Friday night. Excavations typically involve multiple tasks within an overarching master task – researching an article, for example. In an interview with me, Krones outlined the findings:

“There’s clearly a different profile of these activities on the different platforms. On desktops and laptops, people do all three of the activities – they conduct missions and excavations and explorations.

“On their phones we expected to see lots of missions – usually when you use your mobile phone and you’re conducting a search, whatever you’re doing in terms of searching is less important than what’s going on with you in the real world – you’re trying to get somewhere, you’re having a discussion with somebody and you want to look something up quick or you’re trying to make a decision about where to go for dinner.

“But we were surprised to find that people are using their mobile phones for exploration. But once we saw the context, it made sense – people have a low tolerance for boredom. Their phone is actually pretty entertaining, much more entertaining than just looking at the head in front of you while you’re waiting in line. You can go check a sports score, read a story, or look at some viral video and have a more engaged experience.

“On tablets, we found that people are pretty much only using them for exploration today. I had expected to see more missions on tablets, and I think that that will happen in the future, but today people perceive their mobile phone as always with them, very personal, always on, and incredibly efficient for getting information when they’re in mission mode.”

Another study, this one out of the University of British Columbia’s Okanagan campus, also found a significant difference in behavioral modality when it came to interacting with touchscreens. Assistant professor Ying Zhu was the principal author:

“The playful and fun nature of the touchscreen enhances consumers’ favour of hedonic products; while the logical and functional nature of a desktop endorses the consumers’ preference for utilitarian products,” explains Zhu.

“Zhu’s study also found that participants using touchscreen technology scored significantly higher on experiential thinking than those using desktop computers. However, those on desktops scored significantly higher on rational thinking.”

I think what we have here is an example of thinking, fast and slow. I suspect we’re compartmentalizing our activities, subconsciously setting some aside for completion on the desktop. I would suspect utilitarian-type purchasing falls into this category. I know that’s certainly true in my case. As Dr. Zhu noted, we have a very right-brain relationship with touchscreens, while desktops tend to bring out our left brain. I have always been amazed at how our brains subconsciously prime us based on an anticipated operating environment. Chances are, we don’t even realize how much our behaviors change when we move from a smartphone to a tablet to a desktop. But I’d be willing to place a significant wager that it’s this subconscious techno-priming that’s causing some of these behavioral divides between devices.

Slow load times are never a good thing, on any device, but while they certainly don’t help with conversions, they may not be the only culprit sitting between a user and a purchase. The device itself could also be to blame.