Will We Ever Let Robots Shop for Us?

Several years ago, my family and I visited Astoria, Oregon. You’ll find it at the mouth of the Columbia River, where it empties into the Pacific. We happened to take a tour of Astoria and our guide pointed out a warehouse. He told us it was filled with canned salmon, waiting to be labeled and shipped. I asked what brand they were. His answer was “All of them. They all come from the same warehouse. The only thing different is the label.”

Ahh… the power of branding…

Labels can make a huge difference. If you need proof, look no further than the experimental introduction of generic brands in grocery stores. Well, they were generic to begin with, anyway. But over time, the generic “yellow label” was replaced with a plethora of store brands. The quality of what’s inside the box hasn’t changed much, but the packaging has. We do love our brands.

But there’s often no rational reason to do so. Take the aforementioned canned salmon, for example. Same fish, no matter what label you stick on it. Brands are a trick our brain plays on us. We may swear our favorite brand tastes better than its competitors, but it’s usually just our brain short-circuiting our senses and our sensibility. Neuroscientist Read Montague found this out when he redid the classic Pepsi taste test using an fMRI scanner. The result? When Coke drinkers didn’t know what they were drinking, the majority preferred Pepsi. But the minute the brand was revealed, they swore allegiance to Coke again. The taste hadn’t changed, but their brains had. As soon as the brain was aware of the brand, parts of it suddenly started lighting up like a pinball machine.

In previous research we did, we found that the brain instantly responded to favored brands the same way it did to a picture of a friend or a smiling face. Our brains have an instantaneous and subconscious response to brands. And because of that, our brains shouldn’t be trusted with buying decisions. We’d be better off letting a robot do it for us.

And I’m not saying that facetiously.

A recent post on Bloomberg.com looked forward 20 years and predicted how automation would gradually take over every step of the consumer product supply chain, from manufacturing to shipping to delivery to our door. The post predicts that the factory floor, the warehouse, ocean liners, trucks and delivery drones will all be powered by artificial intelligence and robotic labor. The first set of human hands to touch a product might be those of the buyer. But maybe we’re automating the wrong side of the consumer transaction. The thing human hands shouldn’t be touching is the buy button. We suck at it.

We have taken some steps in the right direction. Itamar Simonson and Emanuel Rosen predicted the death of branding in their book Absolute Value:

“In the past the marketing function “protected” the organization in some cases. When things like positioning, branding, or persuasion worked effectively, a mediocre company with a good marketing arm (and deep pockets for advertising) could get by. Now, as consumers are becoming less influenced by quality proxies, and as more consumers base their decisions on their likely experience with a product, this is changing.”

But our brand love dies hard. If our brain can literally rewrite the evidence of our own senses, how can we possibly make rational buying decisions? True, as Simonson and Rosen point out, we do tend to favor objective information when it’s available. But at the end of the day, our buying decisions still rely on an instrument that has proven itself unreliable at making optimal decisions under the influence of brand messaging.

If we’re prepared to let robots steer ships, drive trucks and run factories, why won’t we let them shop for us? Existing shopping bots stop well short of actually making the purchase. We’ll put our lives in the hands of A.I. in myriad ways, but we won’t hand over our credit card. Why is that?

It seems ironic to me. If there were any area where machines could beat humans, it would be in making purchases. They’re much better at filtering on objective criteria, they can stay on top of all prices everywhere, and they can instantly aggregate data from all similar purchases. Most importantly, machines can’t be tricked by branding or marketing. They can complete the Absolute Value loop Simonson and Rosen talk about in their book.

Of course, there’s just one little problem with all that. It essentially ends the entire marketing and advertising industry.

Oops.

Bias, Bug or Feature?

When we talk about artificial intelligence, I think of a real-time Venn diagram in motion. One circle is the sphere of all human activity. It’s huge. The other is the sphere of artificial intelligence activity. It’s growing exponentially. And the overlap between the two is expanding at the same rate. It’s this intersection between the two spheres that fascinates me. What are the rules that govern the interplay between humans and machines?

Those rules necessarily depend on the nature of the interplay. For the sake of this column, let’s focus on the researchers and developers who are trying to make machines act more like humans. Take Jibo, for example. Jibo is “the first social robot for the home.” Jibo tells jokes, answers questions, understands nuanced language and recognizes your face. It’s just one more example of artificial intelligence intended to be a human companion. And as we build machines that are more human, what we’re finding is that many of the things we thought were human foibles are actually features that developed for reasons that were, at one time, perfectly valid.

Trevor Paglen is a winner of the MacArthur “genius grant.” His latest project is to see what AI sees when it’s looking at us: “What are artificial intelligence systems actually seeing when they see the world?” What is interesting about this is that when machines see the world, they use machine-like reasoning to make sense of it. For example, Paglen fed hundreds of images of fellow artist Hito Steyerl into a face-analyzing algorithm. In one instance, she was evaluated as “74% female.”

This highlights a fundamental difference in how machines and humans see the world. Machines calculate probabilities. So do we, but that happens behind the scenes, and it’s only part of how we understand the world. Operating a level higher than that, we use meta-signatures, categorization for example, to quickly compartmentalize and understand the world. We would know immediately that Hito was a woman. We wouldn’t have to crunch the probabilities. By the way, we do the same thing with race.

But is this a feature or a bug? Paglen has his opinion. “I would argue that racism, for example, is a feature of machine learning—it’s not a bug,” he says. “That’s what you’re trying to do: you’re trying to differentiate between people based on metadata signatures and race is like the biggest metadata signature around. You’re not going to get that out of the system.”

Whether we like it or not, our inherent racism was a useful feature many thousands of years ago. It made us naturally wary of other tribes competing for the same natural resources. As much as it’s abhorrent to most of us now, it’s still a feature that we can’t “get out of the system.”

This highlights a danger in this overlap area between humans and machines. If we want machines to think as we do, we’re going to have to equip them with some of our biases. As I’ve mentioned before, there are some things that humans do well, or, at least, do better than machines. And there are things machines do infinitely better than we do. Perhaps we shouldn’t try to merge the two. If we’re trying to get machines to do what humans do, are we prepared to program racism, misogyny, intolerance, bias and greed into the operating system? All these things are part of being human, whether we like to admit it or not.

But there are other areas that are rapidly falling into the overlap zone of my imaginary Venn diagram. Take business strategy, for example. A recent study from Capgemini showed that 79% of organizations implementing AI feel it’s bringing new insights and better data analysis, 74% feel it makes their organizations more creative, and 71% feel it’s helping make better management decisions. A friend of mine recently brought this to my attention along with what was, for him, an uncharacteristic rant: “I really would’ve hoped senior executives might’ve thought creativity and better management decisions were THEIR GODDAMN JOB and not be so excited about being able to offload those dreary functions to AI’s which are guaranteed to be imbued with the biases of their creators or, even worse, unintended biases resulting from bad data or any of the untold messy parts of life that can’t be cleanly digitized.”

My friend hit the proverbial nail on the proverbial head: those “untold messy parts of life” are the things we have evolved to deal with, and the way we deal with them is not always admirable. But in the adaptive landscape we all came from, those approaches were proven to work. We still carry that baggage with us. But is it right to transfer that baggage to algorithms in order to make them more human? Or should we be aiming for a blank slate?

We Don’t Need More Athletes and Models – We Do Need More People Who Understand Complexity

Have you seen the Verizon ad?


The one that starts with LeBron James walking towards the camera. He tells us, “We don’t need more LeBrons.” He’s followed in quick succession by other celebrities, including model Adriana Lima, quarterback Drew Brees and soccer star David Villa, all saying we don’t need more of their kind. The ad wraps up by saying what we do need is more people in science and technology to fill the 4 million jobs available. Verizon is pitching in by supporting education in STEM subjects (Science, Technology, Engineering and Math). The world, apparently, needs a lot more engineers.

Fair enough. The world runs on science and technology. But there’s an unintended consequence that comes with that. Technology is making the world a more complex place. And what we really need is more people who understand what complexity means.

By complexity, I don’t mean complicated. Those are two different things. I mean complexity in its classic sense – coming from the Latin “com” – meaning “together” – and “plex” – meaning “woven”. “Woven together” is a pretty good starting point for understanding complexity. It’s a concept that depends on connection, and we are more connected than ever before. Whether we like it or not, with connection comes complexity. And when we’re talking about complexity, we’re talking about a whole new ball game where all traditional bets are off.

There’s another funny thing about complexity. It’s nothing new. The world has always been complex. Biology has long been the domain of complex adaptive systems. This is true of all of the physical sciences. Benoit Mandelbrot found fractal complexity in leaves and the coastline of Britain. Quantum physics has always been around. It wasn’t invented at the beginning of the last century by Max Planck, Albert Einstein and Niels Bohr. It just took us most of our history as a species to discover it, hiding there beneath the deceptively simple rules of Isaac Newton. Complexity has always been part of nature. We’ve just been ignoring it for a long, long time, believing with all our hearts in a simpler, more comprehensible world.

Humans hate complexity, because complexity brings with it unpredictability and an inherent lack of control. It leads naturally into chaos. We much prefer models with foreseeable outcomes. We have been trying for many years to predict the weather, with very limited success. Why? Because weather is complex and often chaotic. And it’s getting more so, not less.

But the extreme weather we’re seeing more and more of is analogous to many parts of our world. Complexity is rearing its head in more and more places. It lies beneath everything. In the words of the Santa Fe Institute, the self-proclaimed world headquarters for complexity science — “(they) endeavor to understand and unify the underlying, shared patterns in complex physical, biological, social, cultural, technological, and even possible astrobiological worlds”

Which means complexity is everywhere. It impacts everything. And almost none of us understand it. But we’ve got to figure this stuff out, because the stakes are huge.

Let’s take something as important to us as democracy, for instance.

There is nothing especially complex about the idea of democracy. But the model of democracy is a different beast, because it relies on the foundation of our society, which is incredibly complex. Democracy is dependent on unwritten rules, which are in turn dependent on conventions and controls that have been inherent in our society. These are what have been called the “soft guardrails of democracy”. And they are being eroded by our newly connected complexity. A few weeks ago, some of America’s top political scientists got together at Yale University to talk about democracy and almost all of them agreed – democracy is in deep trouble. Yascha Mounk, from Harvard, summed up their collective thoughts succinctly: “If current trends continue for another 20 or 30 years, democracy will be toast.”

So complexity is something we should be learning about. But where to start? And when? Currently, if people study complexity science at all, it’s generally at the post-grad level. And that’s just a handful of people, at a few universities. We need to start understanding complexity and its implications much sooner. It should be covered in grade school. But there’s no one to teach it, because the majority of teachers have no idea what I’m talking about. In a recent dissertation, a researcher from the University of Pennsylvania asked science teachers in a number of schools in Singapore if they were familiar with complexity. The findings were disheartening: “a large sample of ninety Grades 11 and 12 science teachers in six randomly-selected schools across Singapore revealed as many as 80% of the teachers reported that they did not have prior knowledge or heard of complex systems.” By the way, Singapore is consistently rated best in the world for science education. Here in North America, we trail by a significant margin. If this is a problem there, it’s a bigger problem here.

If you’re old enough to remember the movie The Graduate, there was a scene where the graduate, played by Dustin Hoffman, was wandering around his parents’ cocktail party when he was cornered by a family friend, Mr. McGuire. McGuire offered a word of career advice. Literally, one word:

“I just want to say one word to you – just one word. Are you listening? Plastics.”

That was 50 years ago. Today, my word is “complexity.”

Are you listening?

157 Shades of Grey…

Design is important. Thinking through how people will respond to the aesthetics of your product is an admirable thing. I remember once having the pleasure of sharing a stage with JetBlue’s VP of marketing, Amy Curtis-McIntyre. She was explaining how important good design was to the airline’s overall marketing strategy. A tremendous amount of thought went into the aesthetics of all their printed materials, even those cards explaining the safety features of the airplane that none of us ever read. But on JetBlue, not only did passengers read them, they stole them, because they were so cleverly designed. Was this a problem for management? Not according to Amy:

“You know you’re doing something right when people steal your marketing shit.”

So, I’m a fan of good design. But according to a recent story on Fastcodesign.com, Google is going at least 156 shades too far. They seem obsessed with color, or, at least, with testing for colors. The design team for Google’s new home assistant, the Mini, had to pick three different colors for the home appliance. They wanted one to make a personal statement, and apparently that statement is best made by the color “Coral.” Then they needed a color that would sit unobtrusively next to your TV set, and that turned out to be “Charcoal.” Finally, they needed a “floater” color that could go anywhere in the house, including the kitchen. And that’s where the design team at Google may have gone off the rails. They tested 157 shades of grey, yes, 157, before they settled on “Chalk,” which is said to be the most inoffensive shade imaginable. They even worked with a textile firm to create their own custom cloth for the grill on top.

That beats Google’s previous obsessive-compulsive testing record, set when then-VP of Search Marissa Mayer ordered the design team to test 41 different shades of blue for search links to see which got the most clicks. At Google, good design seems to equal endless testing. But is there anything wrong with that?

Well, for one thing, you can test yourself into a rabbit hole, running endless tests and drowning in reams of data looking for the optimal solution – completely missing global maxima while myopically focused on the local. Google tests everything – and I mean everything – truly, madly and deeply. Even Google insiders admit this penchant for testing often gets them focused on the trees rather than the forest. This is particularly true for design. Google has a long history of obsessively turning out ho-hum designs.
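The local-versus-global trap is easy to see in a toy sketch (hypothetical numbers, nothing to do with Google’s actual tests): a greedy optimizer that only ever moves to a better neighbor will happily park on the nearest small peak and never discover the much taller one next door.

```python
import math

def hill_climb(f, x, step=0.1, iters=1000):
    """Greedy local search: move to a neighboring point only if it scores higher."""
    for _ in range(iters):
        best = max([x - step, x, x + step], key=f)
        if best == x:
            break  # no neighbor improves: we're on *a* peak, not necessarily *the* peak
        x = best
    return x

# Two peaks: a small one near x=1 and a global one, three times taller, near x=4.
f = lambda x: math.exp(-(x - 1) ** 2) + 3 * math.exp(-(x - 4) ** 2)

print(round(hill_climb(f, x=0.0), 1))  # 1.0 -- stuck on the local maximum
print(round(hill_climb(f, x=3.0), 1))  # 4.0 -- a luckier start finds the global one
```

The moral is the same as for endless A/B testing: each test only tells you which nearby variant is better, so where you end up depends heavily on where you started looking.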

Personally, when it comes to pure design magic, I much prefer the Apple approach. Led by Steve Jobs’ and Jony Ive’s unerring sense for the aesthetic, it’s hard to think of a longer run of spectacular product designs. Yes, they too sweated the small stuff. But those details were always in service of a higher vision: an empathetic, elegantly simple, friendly approach to product design that somehow magically connected with the user, leaving that user somewhat awed and consistently impressed. One might quibble with the technology that lies inside the package, but no one has put together a more beautiful package than the Apple design team at the height of their powers.

When you look at a Google product, you have the result of endless testing and data crunching. When you look at a classic Apple design, you sense that this came from more than simple testing. This came from intuition and creativity.


Trust No One, Trust Nothing

In just one day last week, two different articles in the MediaPost headlines mentioned a lack of trust: a lack of trust in contextual ad placement, and a lack of trust in audience measurement data. But our industry’s trust issues go far deeper than just those two instances. Article after article cites an erosion of trust and the spreading of relational fault lines in every aspect of the industry.

The question of the day is, “Where did the trust go?” The follow-up question then becomes, “What do we mean by trust?”

That is a difficult question. Trust is a word with many, many meanings. Over 20 years ago, University of Minnesota business professors D. Harrison McKnight and Norman L. Chervany wrote an extensive review to answer just that question. In it, across the many constructs of trust, they identified four dimensions: benevolence, integrity, competence and predictability. But not all these dimensions are required in all applications of trust.

First of all, there are two broad categories of trust: structural trust – trust in a system – and interpersonal trust – trust in a person. In their analysis, McKnight and Chervany looked at six constructs of trust that can apply in different situations. For the sake of this discussion, let’s focus on two of these:

“System trust: the extent to which one believes that proper impersonal structures are in place to enable one to anticipate a successful future endeavor.”

And…

“Situational Trust – meaning that one has decided to trust without regard to the specific persons involved, because the benefits of trusting in this situation outweigh the possible negative outcomes of trusting.”

What trust did exist in marketing was an outcome of these two constructs. Both tend to apply to the structure of marketing, not the people in marketing. The headlines I cited earlier both pointed to a breakdown of trust at the system level, not the personal level. Now, let’s look at those four dimensions as they apply to structural trust in marketing. No one has ever accused marketers of being overly benevolent, so let’s set that one aside. Also, I would argue, strenuously, that marketers today, including those at agencies, are more competent than ever before. They have been mostly successful at turning marketing from an arcane guessing game that paraded as art into an empirically backed science. So a lack of competency can’t be blamed for this trust breakdown. That leaves integrity and predictability. I suspect there’s a compound relationship between these two things.

The reason we’re losing structural trust is that marketing is no longer predictable. And this lack of predictability is triggering a suspicion that there has been a corresponding lack of integrity. But the unpredictability of marketing is no one’s fault.

Marketing today is analogous to physics at the turn of the last century. For 200 years the universe had been neatly ruled by Newton’s laws. Then physicists started discovering things that couldn’t be so neatly explained, and the universe became a place of uncertainty principles, Schrödinger’s cat and strange attractors. Everything we thought was predictable in all situations suddenly became part of a much bigger, and more complex, mystery.

Similarly, mass marketing could run by Newton-like laws because we were dealing with mass and weren’t looking too closely. Apply enough force to enough people with enough frequency and you could move the needle in what seemed like a predictable fashion. But today marketing is a vastly different beast. We market one-to-one, and those “ones” are all interconnected, which creates all types of feedback loops and network effects. This creates complexity, so predictability is as dead as the aforementioned Schrödinger’s cat (or is it?).

I don’t think this comes as news to anyone reading this column. We all know we’re being disrupted. I think we’re all beginning to understand the challenges of complexity. So why don’t we just accept it as the new normal and continue to work together? Why are clients feeling personally betrayed by their agencies, market research firms and ad delivery platforms? It’s because our brains aren’t very nuanced when it comes to identifying trust and betrayal. Brains operate by the “when you’re a hammer, everything looks like a nail” principle.

Rationally, we understand the difference between interpersonal trust and situational trust, but we have to remember that our rationality is reinforced by emotional rewards and cautions. When we’re in a trusting relationship, or system, our ventral striatum, medial prefrontal cortex and caudate nucleus all perk happily along, priming our brains with oxytocin and pushing all the right reward buttons. But whether it’s a person or a situation that betrays our trust, the same neural mechanisms fire, the insula and amygdala, creating feelings of frustration, fear, anger and resentment.

Now, none of this is the fault of anyone in marketing. But humans work on cause and effect. If our marketing is not working, it’s easier to assign a human cause. And it’s much easier to feel betrayed by a human than by a system.


Together We Lie

Humans are social animals. We’ve become this way because – evolutionarily speaking – we do better as a group than individually. But there’s a caveat here. If you get a group of usually honest people together, they’re more likely to lie. Why is this?

Martin Kocher and his colleagues at LMU Munich set up a study in which participants had to watch a video of a single roll of a die and then report the number that came up. Depending on what they reported, there was a payoff. The researchers tested both individuals and small groups, who had the opportunity to chat anonymously with each other before reporting. The result:

“Our findings are unequivocal: People are less likely to lie if they decide on their own.”

Even individuals who answered honestly on their own started lying when they got into a group.

The researchers called this a “dishonesty shift.” They blame it on a shifting weight placed on the norm of honesty. Norms are those patterns we have that guide us in our behaviors and beliefs. But those norms may be different individually than they are when we’re part of a group.

“Feedback is the decisive factor. Group-based decision-making involves an exchange of views that may alter the relative weight assigned to the relevant norm.”

Let’s look at how this may play out. Individually, we may default to honesty. We do so because we’re unsure of the consequences of not being honest. But when we get in a group, we start talking to others and it’s easier to rationalize not being honest – “Well, if everyone’s going to lie, I might as well too.”

Why is this important? Because marketing is done in groups, by groups, to groups. The dynamics of group-based ethics are important for us to understand. It could help to explain the most egregious breaches of ethics we see becoming more and more commonplace, either in corporations or in governments.

Four of the seminal studies in psychology and sociology shed further light on why groups tend to shift towards dishonesty. Let’s look at them individually.

In 1955, Solomon Asch showed that even if individually we believe something to be incorrect, if enough people around us have a different opinion, we’ll go with the group consensus rather than risk being the odd person out. In his famous study, he surrounded a subject with “plants” who, when shown cards with black lines of obviously differing lengths on them, would insist the lines were equal. The subjects were then asked their opinion. In 75% of the cases, they’d go with the group rather than risk disagreement. As Asch said in his paper, quoting sociologist Gabriel Tarde, “Social man is a somnambulist.” We have about as much independent will as your average sleepwalker.

Now, let’s continue with Stanley Milgram’s obedience-to-authority study, perhaps the most controversial and frightening of the group. When confronted with an authoritative demeanor, a white coat and a clipboard, 65% of the subjects meekly followed directions and delivered what were supposed to be lethal levels of electrical shock to a hapless individual. The results were so disheartening that we’ve been trying to debunk them ever since. But a follow-up study by Stanford psychology professor Philip Zimbardo, in which subjects were arbitrarily assigned roles as guards and inmates in a mock prison scenario, was even more shocking. We’re more likely to become monsters and abandon our personal ethics when we’re in a group than when we act alone. Whether it’s obedience to authority, as Milgram was trying to prove, or social conformity taken to the extreme, we tend to do very bad things when we’re in bad company.

But how do we slip so far so quickly from our own personal ethical baseline? Here’s where the last study I’ll cite can shed a little light. Sociologist Mark Granovetter, famous for his Strength of Weak Ties study, also looked at the viral spreading of behaviors in groups. I’ve talked about this in a previous column, but here’s the short version: If we have the choice between two options with accompanying social consequences, which one we choose may be driven by social conformity. If we see enough other people around us picking the more disruptive option (i.e., starting a riot), we may follow suit. Even though we all have different thresholds, which we do, the nature of a crowd is such that those with the lowest thresholds will pick the disruptive option first, setting off a bandwagon effect that eventually tips the entire group over the threshold.
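Granovetter’s threshold model is simple enough to simulate in a few lines. Here’s a minimal sketch (the thresholds are illustrative, not data from his paper): each person joins the disruptive option once enough others already have, and the cascade either snowballs or stalls depending on the exact mix of thresholds.

```python
def riot_size(thresholds):
    """Granovetter-style threshold model: each person joins the
    disruptive option once the number already participating
    meets or exceeds their personal threshold."""
    joined = 0
    while True:
        # count everyone whose threshold is satisfied by current turnout
        now = sum(1 for t in thresholds if t <= joined)
        if now == joined:
            return joined  # stable point: no one else tips over
        joined = now

# A crowd whose thresholds form an unbroken chain: one instigator
# (threshold 0), one who joins after seeing one rioter, and so on.
# The whole crowd of 100 tips over.
print(riot_size(list(range(100))))           # 100

# Remove just the person with threshold 1 and the chain breaks:
# the instigator riots alone.
print(riot_size([0] + list(range(2, 100))))  # 1
```

The second case is the point: two crowds with nearly identical moral makeups can produce wildly different outcomes, which is exactly the kind of knife-edge unpredictability group behavior adds.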

These were all studied in isolation, because that’s how science works. We study variables in isolation. But it’s when factors combine that we get the complexity that typifies the real world – and the real marketplace. And there’s where predictability goes out the window. The group dynamics in play can create behavioral patterns that make no sense to the average person with the average degree of morality. But it’s happened before, it’s happening now, and it’s sure to happen again.


I, Robot….

Note: No Artificial Intelligence was involved in the creation of this column.

In the year 1942, science fiction writer Isaac Asimov introduced the Three Laws of Robotics in his collection of short stories, I, Robot.

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Asimov presented the laws as coming from the Handbook of Robotics, 56th Edition, 2058 A.D. What was once an unimaginably distant future is now knocking with increasing intensity on the door of the present. And Elon Musk, for one, is worried: “AI is a fundamental risk to the existence of human civilization.” Musk believes, Laws of Robotics or no, we won’t be able to control this genie once it gets out of its bottle.

Right now, the genie looks pretty benign. In the past year, the Washington Post has used robot reporters to write over 850 stories. The Post believes this is a win/win with their human reporters, because the robot, named Heliograf, can:

  • Cover stories that wouldn’t have been covered due to lack of human resources
  • Do the factual heavy lifting for human reporters
  • Alert humans to possible news stories in big data sets

So, should we fear or cheer robots? I think the Post’s experiment highlights two areas that AI excels at, and indicates how we might play nice with machines.

For AI to work effectively, the dots have to be pretty well sketched out. When they are, AI can be tireless in scouting out relevant facts and data where humans would tend to get bored easily. But humans are still much better at connecting those dots, especially when no obvious connection is apparent. We do it through something called intuition. It’s at least one area where we can still blow machines away.

Machines are also good at detecting patterns in overwhelming amounts of data. Humans tend to overfit: we make the data fit our narratives. We’ll come back to this point in a minute, but for now, let’s go back to intuition. It’s still the trump card we humans hold. In 2008, Wired editor Chris Anderson prematurely (and, many believe, incorrectly) declared the scientific method dead, thanks to the massive data sets we now have available:

“We can analyze the data without hypotheses about what it might show. We can throw the numbers into the biggest computing clusters the world has ever seen and let statistical algorithms find patterns where science cannot.”

Anderson gets it partly right, but he also unfairly gives intuition short shrift. This is not a zero sum game. Intuition and A.I. can and should play nicely together. As I mentioned a few weeks ago, human intuition was found to boost the effectiveness of an optimization algorithm by 25%.

Evolutionary biologist Richard Dawkins recently came to the defense of intuition in Science, saying:

“Science proceeds by intuitive leaps of the imagination – building an idea of what might be true, and then testing it”

The very human problem comes when we let our imaginations run away from the facts, bending science to fit our hypotheses:

“It is important that scientists should not be so wedded to that intuition that they omit the very important testing stage.”

There is a kind of reciprocation here – an oscillation between phases. Humans are great at some stages – the ones that require intuition and imagination – and machines are better at others – where a cold and dispassionate analysis of the facts is required. Like most things in nature that pulse with a natural rhythm, the whole gains from the opposing forces at work here. It is a symphony with a beat and a counterbeat.

That’s why, for the immediate future anyway, machines should bend not to our will, but to our imagination.

Addicted to Tech

A few columns ago, I mentioned one of the aspects that is troubling me about technology – the shallowness of social media. I had mentioned at the time that there were other aspects that were equally troubling. Here’s one:

Technology is addictive – and it’s addictive by design.

Let’s begin by looking at the definition of addiction:

Persistent compulsive use of a substance known by the user to be harmful

So, let’s break it down. I don’t think you can quibble with the persistent, compulsive use part. When’s the last time you had your iPhone in your hand? We can simply swap out “substance” for “device” or “technology.” That leaves us with the last qualifier – “known by the user to be harmful” – and there are two parts to this: is it harmful, and does the user know it’s harmful?

First, let’s look at the neurobiology of addiction. What causes us to use something persistently and compulsively? Here, dopamine is the culprit. Our reward center uses dopamine and the pleasurable sensation it produces as a positive reinforcement to cause us to pursue activities which over many hundreds of generations have proven to be evolutionarily advantageous. But Dr. Gary Small, from the UCLA Brain Research Institute, warns us that this time could be different:

“The same neural pathways in the brain that reinforce dependence on substances can reinforce compulsive technology behaviors that are just as addictive and potentially destructive.”

We like to think of big tobacco as the most evil of all evil empires – guilty of promoting addiction to a harmful substance – but is there much separating them from the purveyors of tech – Facebook or Google, for instance? According to Tristan Harris, there may be a very slippery slope between the two. I’ve written about Tristan before. He’s the former Google Product Manager who launched the Time Well Spent non-profit, devoted to stopping “tech companies from hijacking our minds.” Harris points the finger squarely at the big Internet companies for creating platforms that are intentionally designed to suck up as much of our time as possible. There’s empirical evidence to back up Harris’s accusations. Researchers at Michigan State University and from two universities in the Netherlands found that even seeing the Facebook logo can trigger a conditioned response in a social media user that starts the dopamine cycle spinning. We start jonesing for a social media fix.

So, what if our smart phones and social media platforms seduce us into using them compulsively? What’s the harm? That’s the second part of the addiction equation – is whatever we’re using actually harmful? After all, it’s not like tobacco, which was proven to cause lung cancer.

Ah, but that’s the thing, isn’t it? We were smoking cigarettes for almost a hundred years before we finally found out they were bad for us. Sometimes it takes a while for the harmful effects of addiction to appear. The same could be true for our tech habit.

Tech addiction plays out at many different levels of cognition. This could potentially be much more sinister than just the simple waste of time that Tristan Harris is worried about. There’s mounting evidence that overuse of tech could dramatically alter our ability to socialize effectively with other humans. The debate, which I’ve talked about before, comes when we substitute screen-to-screen interaction for face-to-face. The supporters say that this is simply another type of social bonding – one that comes with additional benefits. The naysayers worry that we’re just not built to communicate through screens and that – sooner or later – there will be a price to be paid for our obsessive use of digital platforms.

Dr. Jean Twenge, professor of psychology at San Diego State University, researches generational differences in behavior. It’s here where the full impact of the introduction of a disruptive environmental factor can be found. She found a seismic shift in behaviors between Millennials and the generation that followed them. It was a profound difference in how these generations viewed the world and where they spent their time. And it started in 2012 – the year when the proportion of Americans who owned a smartphone surpassed 50 percent. She sums up her concern in unequivocal terms:

“The twin rise of the smartphone and social media has caused an earthquake of a magnitude we’ve not seen in a very long time, if ever. There is compelling evidence that the devices we’ve placed in young people’s hands are having profound effects on their lives—and making them seriously unhappy.”

Not only are we less happy, we may be becoming less smart. As we become more reliant on technology, we do something called cognitive off-loading. We rely on Google rather than our memories to retrieve facts. We trust our GPS more than our own wayfinding strategies to get us home. Cognitive off-loading is a way to move beyond the limits of our own minds, but there may be an unacceptable trade-off here. Brains are like muscles – if we stop using them, they begin to atrophy.

Let’s go back to that original definition and the three qualifying criteria:

  • Persistent, compulsive use
  • Harmful
  • We know it’s harmful

In the case of tech, let’s not wait a hundred years to put check marks after all of these.


To Buy or Not to Buy: The Touchy Subject of Mobile ECommerce

A recent report from Akamai indicates that users have little patience when it comes to making purchases on a mobile device. Here are just a few of the stats:

  • While almost half of all consumers browse via their phones, only 1 in 5 complete transactions on mobile
  • Optimal load times for peak conversions ranged from 1.8 to 2.7 seconds across device types
  • Just a 100-millisecond delay in load time hurt conversion rates by up to 7%
  • Bounce rates were highest among mobile shoppers and lowest among those using tablets

But there may be more behind this than just slow load times. We also have to consider what modes we’re in when we’re interacting with our mobile device.

In 2010, Microsoft did a fascinating research project that looked at how user behaviors varied from desktop to tablet to smart phone. The research was headed by Jacquelyn Krones, who was a Search Product Manager at the time. Search was the primary activity examined, but a larger behavioral context was also explored. While the study is 7 years old, I think the core findings are still relevant. The researchers found that we tend to have three large buckets of behaviors: missions, explorations and excavations. Missions were focused tasks, usually looking for a specific piece of information – i.e. an address or phone number. Explorations were more open-ended and less focused on a given destination – i.e. seeing if there was anything you wanted to do this Friday night. Excavations typically involved multiple tasks within an overarching master task – i.e. researching an article. In an interview with me, Krones outlined their findings:

“There’s clearly a different profile of these activities on the different platforms. On desktops and laptops, people do all three of the activities – they conduct missions and excavations and explorations.

“On their phones we expected to see lots of missions – usually when you use your mobile phone and you’re conducting a search, whatever you’re doing in terms of searching is less important than what’s going on with you in the real world – you’re trying to get somewhere, you’re having a discussion with somebody and you want to look something up quick or you’re trying to make a decision about where to go for dinner.

“But we were surprised to find that people are using their mobile phones for exploration. But once we saw the context, it made sense – people have a low tolerance for boredom. Their phone is actually pretty entertaining, much more entertaining than just looking at the head in front of you while you’re waiting in line. You can go check a sports score, read a story, or look at some viral video and have a more engaged experience.

“On tablets, we found that people are pretty much only using them for exploration today. I had expected to see more missions on tablets, and I think that that will happen in the future, but today people perceive their mobile phone as always with them, very personal, always on, and incredibly efficient for getting information when they’re in mission mode.”

Another study, coming out of The University of British Columbia Okanagan, also saw a significant difference in behavioral modality when it came to interacting with touch screens. Assistant Professor Ying Zhu was the principal author:

“The playful and fun nature of the touchscreen enhances consumers’ favour of hedonic products; while the logical and functional nature of a desktop endorses the consumers’ preference for utilitarian products,” explains Zhu.

“Zhu’s study also found that participants using touchscreen technology scored significantly higher on experiential thinking than those using desktop computers. However, those on desktops scored significantly higher on rational thinking.”

I think what we have here is an example of thinking, fast and slow. I suspect we’re compartmentalizing our activities, subconsciously setting some aside for completion on the desktop. Utilitarian-type purchasing would likely fall into this category. I know that’s certainly true in my case. As Dr. Zhu noted, we have a very right-brain relationship with touchscreens, while desktops tend to bring out our left brain. I have always been amazed at how our brains subconsciously prime us based on anticipating an operating environment. Chances are, we don’t even realize how much our behaviors change when we move from a smart phone to a tablet to a desktop. But I’d be willing to place a significant wager that it’s this subconscious techno-priming that’s causing some of these behavioral divides between devices.

Slow load times are never a good thing, on any device, but while they certainly don’t help with conversions, they may not be the only culprit sitting between a user and a purchase. The device itself could also be to blame.

The Assisted Reality of the New Marketer

Last week, MediaPost’s Laurie Sullivan warned us that the future of analytical number crunchers is not particularly rosy in the world of marketing. With cognitive technologies like IBM’s Watson coming on strong in more and more places, analytic skills are not that hot a commodity anymore. Ironically, when it comes to marketing, the majority of companies have no plans to incorporate cognitive technologies in the near future. According to a report from IBM and Oxford Economics, only 24% of organizations have a plan to incorporate CT into their own operations.

Another study, from Forrester, explored AI marketing readiness in the retail and eCommerce sectors. There, the state of readiness is a little better. In these typically forward-thinking sectors, 72% are implementing AI marketing tech in the next year, but only 45% of those companies would consider themselves excelling in at least two of three dimensions of readiness.

If those numbers seem contradictory, we should understand what the difference between cognitive technology and artificial intelligence is. You’ll notice that IBM refers to Watson as “cognitive computing.” As Rob High, IBM’s CTO for Watson put it, “What it’s really about is involvement of a human in the loop,” and he described Watson as “augmented intelligence” rather than artificial intelligence.

That “human in the loop” is a critical difference between the two technologies. Whether we like it or not, machines are inevitable in the world of marketing, so we’d better start thinking about how to play nice with them.


I remember first seeing a video from the IBM Amplify summit at a MediaPost event last year. Although the presentation was a little stilted, the promise was intriguing. It showed a marketer musing about a potential campaign and throwing “what ifs” at Watson, which quickly responded with quantified answers, analyzed almost instantly. The premise of the video was to show how smart Watson was. But here’s a “what if” to consider: what if the real key was the hypotheticals that the human seemed to be pulling out of the blue? That doesn’t seem that impressive to us – certainly not as impressive as Watson’s corralling and crunching of relevant numbers in the blink of an eye. Musing is what we do. But this is just one example of something called Moravec’s Paradox.

Moravec’s Paradox, as stated by AI pioneer Marvin Minsky, is this: “In general, we’re least aware of what our minds do best. We’re more aware of simple processes that don’t work well than of complex ones that work flawlessly.” In other words, what we find difficult are the tasks that machines are well suited for, and the things we’re not even aware of are the things machines find notoriously hard to do. Things like intuition. And empathy. If we’re looking at the future of the human marketer, we’re probably looking at those two things.

In his book, Humans are Underrated, Geoff Colvin writes,

“Rather than ask what computers can’t do, it’s much more useful to ask what people are compelled to do—those things that a million years of evolution cause us to value and seek from other humans, maybe for a good reason, maybe for no reason, but it’s the way we are.”

We should be ensuring that both humans and machines are doing what they do best, essentially erasing Moravec’s Paradox. Humans focus on intuition and empathy and machines do the heavy lifting on the analyzing and number crunching. The optimal balance – at this point anyway – is a little bit of both.

In Descartes’ Error, neurologist Antonio Damasio showed that without human intuition and emotion – together with the corresponding physical cues he called somatic markers – we could rationalize ourselves into a never-ending spiral without ever coming to a conclusion. We need to be human to function effectively.

Researchers at MIT have even tried to build this into an algorithm. In 1954, Herbert Simon introduced a concept called bounded rationality. It may seem like this puts limits on the cognitive power of humans, but as programmers like to say, bounded rationality is a feature, not a bug. The researchers at MIT found that in an optimization challenge, such as finding the optimal routing strategy for an airline, humans have the advantage of being able to impose intuitive limits on the number of options considered. For example, a human can say, “Planes should visit each city at most once,” and thereby dramatically limit the number crunching required. When these intuitive strategies were converted to machine language and introduced into automated algorithms, those algorithms got 10 to 15% smarter.
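To make the scale of that pruning concrete, here’s a toy sketch (my own illustration, not the MIT researchers’ code) of how a single human-supplied constraint shrinks a routing search space. The city names and counts are purely hypothetical:

```python
from itertools import permutations

# Hypothetical stops on an airline route
cities = ["A", "B", "C", "D"]
n = len(cities)

# Naive search space: any sequence of n stops, repeat visits allowed.
naive_routes = n ** n  # 4^4 = 256 candidate routes

# Human-supplied intuitive constraint: "visit each city at most once."
# Only the orderings (permutations) of the cities survive the pruning.
pruned_routes = len(list(permutations(cities)))  # 4! = 24 candidate routes

print(naive_routes, pruned_routes)  # 256 24
```

With just four cities, one line of intuition cuts the candidate set by more than 90%; the gap widens factorially as the number of cities grows, which is why encoding such constraints can make an optimization algorithm measurably “smarter.”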

When it comes right down to it, the essence of marketing is simply a conversation between two people. All the rest: the targeting, the automation, the segmentation, the media strategy – this is all just to add “mass” to marketing. And that’s all the stuff that machines are great at. For us humans, our future seems to rely on our past – and on our ability to connect with other humans.