Curmudgeon, Chicken Little or Cognoscenti?

Apparently I’m old and out of step. Curmudgeonly, even. And this is from people of my own generation. My previous column about the potential shallowness encouraged by social media drew a few comments that indicated I was just being a grumpy old man. One was from an old industry friend – Brett Tabke:

“The rest of the article is like out of the 70’s in that it is devoid of the reality that is the uber-me generation. The selfie is only a reflection of their inward focus.”

The other was from Monica Emrich, whom I’ve never had the pleasure of meeting:

“‘Social Media Is Barely Skin-Deep.’ ho hum. History shows: when new medium hits, civilization as we know it is over.”

These comments seem to be telling me, “Relax. You just don’t understand because you’re too old. Everything will be great.” And, if that’s true, I’d be okay with that. I’m more than willing to be proven a doddering old fool if it means technology is ushering us into a new era of human greatness.

But what if this time is different? What if Monica’s facetious comment actually nailed it? Maybe civilization as we know it will be over. The important part of this is “as we know it.” Every technological disruption unleashes a wave of creative destruction that pushes civilization in a new direction. We seem to blindly assume it will always go in the right direction. And it is true that technology has generally elevated the human race. But not uniformly – and not consistently. What if this shift is different? What if we become less than what we were? It can happen. Brexit, xenophobia, Trump, populism – all these things are surfing on the tides of new technology.

Here’s the problem. There are some aspects of technology that we’ve never had to deal with before – at least, not at this scale. One of these aspects (others will no doubt be the topic of a future Media Insider) is that technology is now immersive and ubiquitous. It creates an alternate reality for us, and it has done it in a few short decades. Why is this dangerous? It’s dangerous because evolution has not equipped us to deal with this new reality. In the past, when there has been a shift in our physical reality, it has taken place over several generations. Natural selection had the time to reshape the human genome to survive and eventually thrive in the new reality. Along the way, we acquired checks and balances that allowed us to deal with the potentially negative impacts of our environment.

But our new reality is different. It has happened in the space of a single generation. There is no way we could have acquired natural defenses against it. We are operating in an environment for which we are untested. The consequences are yet to be discovered.

Now, your response might be to say, “Yes, evolution doesn’t move this quickly, but our brains can. They are elastic and malleable.” This is true, but there’s a big “but” hidden in this approach. Our brains rewire to better match their environment. This is one of the things humans excel at. But this rewiring happens on top of a primitive platform with some built-in limitations. The assumption is that a better match with our environment provides a better chance for survival of the species.

But what if technology is throwing us a curveball in this case? No matter what environment we have adapted to, there has been one constant: the history of humans depends on our success in living together. We have evolved to be social animals, but that evolution is predicated on the assumption that our socializing would take place face-to-face. Technology is artificially decoupling our social interactions from the very definition of society that we have evolved to handle. A recent Wharton interview with Eden Collinsworth sounds the same alarm bells.

“The frontal lobes, which are the part of the brain that puts things in perspective and allows you to be empathetic, are constantly evolving. But it is less likely to evolve and develop those skills if you are in front of a screen. In other words, those skills come into play when you have a face-to-face interaction with someone. You can observe facial gestures. You can hear the intonation of a voice. You’re more likely to behave moderately in that exchange, unless it’s a just a knock-down, drag-out fight.”

Collinsworth’s premise – which is covered in her new book, Behaving Badly – is that this artificial reality is changing our concepts of morality and ethics. She reminds us the two are interlinked, but they are not the same thing. Morality is our own personal code of conduct. Ethics are a shared code that society depends on to instill a general sense of fairness. Collinsworth believes both are largely learned from the context of our culture. And she worries that a culture that is decoupled from the physical reality we have evolved to operate in may have dire consequences.

The fact is that if our morality and ethics are intended to keep us socially more cohesive, this works best in a face-to-face context. In an extreme example of this, Lt. Col. Dave Grossman, a former paratrooper and professor of psychology at West Point, showed how our resistance to killing another human in combat is inversely related to our physical distance from them. The closer we are to them, the more resistant we are to the idea of killing them. This makes sense in an evolutionary environment where all combat was hand-to-hand. But today, the killer could be in a drone flight control center thousands of miles from his or her intended target.

This evolved constraint on unethical behavior – the social check and balance of being physically close to the people we’re engaging with – is important. And while the two examples I’ve cited – the self-absorbed behavior on social networks and the moral landscape of a drone strike operator – may seem magnitudes apart in terms of culpability, the underlying neural machinery is related. What we believe is right and wrong is determined by a moral compass set to the bearings of our environment. The fundamental workings of that compass assume we will be face-to-face with the people we have to deal with. But thanks to technology, that’s no longer the case.

Maybe Brett and Monica are right. Maybe I’m just being alarmist. But if not, we’d better start paying more attention. Because civilization “as we know it” may be ending.

 

Our Brain on Reviews

An interesting new study about how our brains mathematically handle online reviews was just published, and I want to talk about it today. But before I get to that, I want to talk a bit about foraging.

The story of how science discovered our foraging behaviors serves as a mini-lesson in how humans tick. The economists of the 1940s and ’50s discovered the world of micro-economics, based on the foundation that humans were perfectly rational – we were homo economicus. When making personal economic choices in a world of limited resources, we maximized utility. The economists of the time assumed this was a uniquely human property, bequeathed to us by virtue of the reasoning power of our superior brains.

In the ’60s, behavioral ecologists knocked our egos down a peg or two. It wasn’t just humans that could do this. Foxes could do it. Starlings could do it. Pretty much any species had the same ability to seemingly make optimal choices when faced with scarcity. It was how animals kept from starving to death. This was the birth of foraging theory. This wasn’t some Homo sapiens-exclusive behavior directed from the heights of rationality downwards. It was an evolved behavior built from the ground up. It’s just that humans had learned how to apply it to our abstract notion of economic utility.

Three decades later, two researchers at Xerox’s Palo Alto Research Center found another twist. Not only had our ability to forage evolved all the way through our extensive family tree, but we seemed to borrow this strategy and apply it to entirely new situations. Peter Pirolli and Stuart Card found that when humans navigate content in online environments, the exact same patterns could be found. We foraged for information. The same calculations determined whether we would stay in an information “patch” or move on to more promising territory.

This seemed to indicate three surprising discoveries about our behavior:

  • Much of what we think is rational behavior is actually driven by instincts that have evolved over millions of years
  • We borrow strategies from one context and apply them in another. We use the same basic instincts to find the FAQ section of a website that we used to find sustenance on the savannah.
  • Our brains seem to use Bayesian logic to continuously calculate and update a model of the world. We rely on this model to survive in our environment, whatever and wherever that environment might be.

So that brings us to the study I mentioned at the beginning of this column. If we take the above into consideration, it should come as no surprise that our brain uses similar evolutionary strategies to process things like online reviews. But the way it does it is fascinating.

The amazing thing about the brain is how it seamlessly integrates and subconsciously synthesizes information and activity from different regions. For example, in foraging, the brain integrates information from the regions responsible for wayfinding – knowing our place in the world – with signals from the dorsal anterior cingulate cortex – an area responsible for reward monitoring and executive control. Essentially, the brain is constantly updating an algorithm about whether the effort required to travel to a new “patch” will be balanced by the reward we’ll find when we get there. You don’t consciously marshal the cognitive resources required to do this. The brain does it automatically. What’s more – the brain uses many of the same resources and algorithm whether we’re considering going to McDonald’s for a large order of fries or deciding what online destination would be the best bet for researching our upcoming trip to Portugal.
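To make that effort-versus-reward calculation a little more concrete, here is a minimal sketch in Python of the kind of stay-or-go rule foraging theory describes. The numbers and the specific rule are my own invented assumptions, not anything measured in the brain: stay in the current patch while its (depleting) rate of return beats what you could expect elsewhere once travel cost is factored in.

```python
# A toy "patch-leaving" rule in the spirit of foraging theory.
# All numbers and the rule itself are invented for illustration only.

def patch_yield(time_in_patch, initial_rate=10.0, depletion=0.5):
    """Reward per unit time in the current patch; patches deplete the longer we stay."""
    return max(initial_rate - depletion * time_in_patch, 0.0)

def should_leave(time_in_patch, travel_cost, average_rate_elsewhere):
    """Leave when the current patch's return drops below what we'd expect
    elsewhere, discounted by the effort of travelling there."""
    return patch_yield(time_in_patch) < (average_rate_elsewhere - travel_cost)

# Stay while the patch (the fries, the website) keeps paying off; go when it doesn't.
for t in range(20):
    if should_leave(t, travel_cost=2.0, average_rate_elsewhere=7.0):
        print(f"Leave the patch after {t} time units")
        break
```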

In evaluating online reviews, we have a different challenge: how reliable are the reviews? The context may be new – our ancestors didn’t have TripAdvisor or AirBNB ratings for choosing the right cave to sleep in tonight – but the problem isn’t. What criteria should we use when we decide to integrate social information into our decision making process? If Thorlak the bear hunter tells me there’s a great cave a half-day’s march to the south, should I trust him? Experience has taught us a few handy rules of thumb when evaluating sources of social information: reliability of the source and the consensus of crowds. Has Thorlak ever lied to us before? Do others in the tribe agree with him? These are hardwired social heuristics. We apply them instantly and instinctively to new sources of information that come from our social network. We’ve been doing it for thousands of years. So it should come as no surprise that we borrow these strategies when dealing with online reviews.

In a neuro-scanning study from University College London, researchers found that reliability plays a significant role in how our brains treat social information. Once again, a well-evolved capability of the brain is recruited to help us in a new situation. The dorsomedial prefrontal cortex is the area of the brain that keeps track of our social connections. This “social monitoring” ability of the brain worked in concert with the ventromedial prefrontal cortex, an area that processes value estimates.

The researchers found that this part of our brain works like a Bayesian computer when considering incoming information. First we establish a “prior” that represents a model of what we believe to be true. Then we subject this prior to possible statistical updating based on new information – in this case, online reviews. If our confidence is high in this “prior” and the incoming information is weak, we tend to stick with our initial belief. But if our confidence is low and the incoming information is strong – i.e. a lot of positive reviews – then the brain overrides the prior and establishes a new belief, based primarily on the new information.
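As a rough illustration of that prior-plus-evidence logic, here is a small Beta-Binomial sketch in Python. This is my own toy model, not the actual model fitted in the UCL study: a belief backed by a lot of prior "pseudo-counts" barely moves when a handful of reviews arrives, while a weakly held belief gets swamped by the same reviews.

```python
# Toy Bayesian (Beta-Binomial) update -- an illustration of "prior + new evidence",
# not the model used in the UCL study.

def updated_belief(prior_positive, prior_negative, new_positive, new_negative):
    """Treat the prior belief as pseudo-counts and fold in the new reviews."""
    positive = prior_positive + new_positive
    negative = prior_negative + new_negative
    return positive / (positive + negative)  # expected probability that "this is good"

# Strong prior (100 pseudo-counts at 80% positive): 20 new good reviews barely move us.
print(updated_belief(80, 20, new_positive=20, new_negative=0))  # ~0.83

# Weak prior (5 pseudo-counts at 80% positive): the same 20 reviews dominate the outcome.
print(updated_belief(4, 1, new_positive=20, new_negative=0))    # ~0.96
```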

While this seems like common sense, the mechanisms at play are interesting. The brain effortlessly pattern matches new types of information and recruits the region that is most likely to have evolved to successfully interpret that information. In this case, the brain had decided that online reviews are most like information that comes from social sources. It combines the interpretation of this data with an algorithmic function that assigns value to the new information and calculates a new model – a new understanding of what we believe to be true. And it does all this “under the hood” – sitting just below the level of conscious thought.

Flow and the Machine

“In the future, either you’re going to be telling a machine what to do, or the machine is going to be telling you.”

Christopher Penn – VP of Marketing Technology, Shift Communications.

I often talk about the fallibility of the human brain – those irrational cognitive biases that can cause us to miss the reality that’s right in front of our face. But there’s another side to the human brain – the intuitive, almost mystical machinations that happen when we’re on a cognitive roll, balancing gloriously on the edge between consciousness and subconsciousness. Malcolm Gladwell took a glancing shot at this in his mega-bestseller Blink. But I would recommend going right to the master of “Flow” – Mihaly Csikszentmihalyi (pronounced, if you’re interested, me-hi Chick-sent-me-hi). The Hungarian psychologist coined the term “flow,” referring to a highly engaged mental state where we’re completely absorbed in the work at hand. Csikszentmihalyi calls it the “psychology of optimal experience.”

It turns out there’s a pretty complicated neuroscience behind flow. In a blog post, gamer Adam Sinicki describes a state where the brain finds an ideal balance between instinctive behavior and total focus on one task. The state is called transient hypofrontality, and it can sometimes be brought on by physical exercise – it’s why some people can think better while walking, or even jogging. The brain juggles the resources required, and this can force a stepping-down of the prefrontal cortex, the part of the brain that causes us to question ourselves. This part of the brain is required in unfamiliar circumstances, but in a situation where we’ve thoroughly rehearsed the actions required, it’s actually better if it takes a break. This allows other – more intuitive – parts of the brain to come to the fore. And that may be the secret of “flow.” It may also be the one thing that machines can’t replicate – yet.

The Rational Machine

If we were to compare the computer to a part of the brain, it would probably be the Prefrontal Cortex (PFC). When we talk about cognitive computing, what we’re really talking about is building a machine that can mimic – or exceed – the capabilities of the PFC. This is the home of our “executive function” – complex decision making, planning, rationalization and our own sense of self. It’s probably not a coincidence that the part of our brain we rely on to reason through complex challenges, like designing artificial intelligence, would build a machine in its own image. And in this instance, we’re damned close to surpassing ourselves. The PFC is an impressive chunk of neurobiology in its flexibility and power, but speedy it’s not. In fact, we’ve found that if we happen to make a mistake, the brain slows almost to a standstill. It shakes our confidence and kills any “flow” that might be happening in its tracks. This is what happens to athletes when they choke. With artificial intelligence, we are probably on the cusp of creating machines that can do most of what the PFC can do, only faster, more reliably and with the ability to process much more information.

But there’s a lot more to the brain than just the PFC. And it’s in this ethereal intersection between reason and intuition where the essence of being human might be hiding.

The Future of Flow

What if we could harness “flow” at will? What if we worked in partnership with a machine that could crunch data in real time and present us with the inputs required to continue our flow-fueled exploration, without the fear of making a mistake? It’s not so much a machine telling us what to do – or the reverse – as it is a partnership between human intuition and machine-based rationalization. It’s analogous to driving a modern car, where the intelligent safety and navigation features backstop our ability to drive.

Of course, it may just be a matter of time before machines best us in this area as well. Perhaps machines already have mastered flow because they don’t have to worry about the consequences of making a mistake. But it seems to me that if humans have a future, it’s not going to be in our ability to crunch data and rationalize. We’ll have to find something a little more magical to stake our claim with.

 

 

The Status Quo Bias – Why Every B2B Vendor has to Understand It

It’s probably the biggest hurdle any B2B vendor has to get over. It’s called the Status Quo Bias, and it’s deadly in any high-risk purchase scenario. According to Wikipedia, the bias occurs when the current baseline (or status quo) is taken as a reference point, and any change from that baseline is perceived as a loss. In other words: if it ain’t broke, don’t fix it. We believe that simply because something exists, it must have merit. The burden of proof then falls on the vendor to overcome this level of complacency.

The Status Quo Bias is actually a bundle of other common biases, including the Endowment Effect, the Loss Aversion Bias, the Existence Bias, the Mere Exposure Effect and other psychological factors that tend to continually jam the cogs of B2B commerce. Why B2B? The Status Quo Bias is common in any scenario where risk is high and reward is low, but B2B in particular is subject to it because these are group buying decisions. And, as I’ll soon explain, groups tend to default to the Status Quo Bias with irritating regularity. The new book from CEB (recently acquired by Gartner) – The Challenger Customer – is all about the Status Quo Bias.

So why is the bias particularly common with groups? Think of the dynamics at play here. Generally speaking, most people have some level of the Status Quo Bias. Some will have it more than others, depending on their level of risk tolerance. But let’s look at what happens when we lump all those people together in a group and force them to come to a consensus. Generally, you’re going to have one or two people in the group who are driving for change. Typically, these will be the ones who have the most to gain and whose risk tolerance threshold allows the deal to go forward. On the other end of the spectrum, you have some people who have low risk tolerance and nothing to gain. They may even stand to lose if the deal goes forward (think IT people who have to implement a new technology). In between, you have the moderates. Their gain factor and risk tolerance levels net out to close to zero. Given that those who have something to gain will say yes and those who have nothing to gain will say no, it’s this middle group that will decide whether the deal lives or dies.

Without the Status Quo Bias, the deal might have a 50/50 chance. But the Status Quo Bias stacks the deck towards negative outcomes for the vendor. Even if it tips the balance just a little bit towards “no” – that’s all that’s required to stop a deal dead in its tracks. The more disruptive the deal, the greater the Status Quo Bias. Let’s remember – this is B2B. There are no emotional rewards that can introduce a counteracting bias. It’s been shown in at least one study (Baker, Laury, Williams – 2008) that groups tend to be more risk averse than the individuals that make up those groups. When the groups start discussing and – inevitably – disagreeing, it’s typically easier to do nothing.
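To see how a small bias can swing a group outcome, here is a toy simulation in Python. Everything in it – the numbers, the voting rule, the size of the status quo penalty – is a hypothetical of my own, not something from the CEB research: each member weighs expected gain against risk aversion, the status quo bias shaves a little off every score, and the committee defaults to "no" unless a clear majority says "yes".

```python
# Toy model of a B2B buying committee and the status quo bias.
# All numbers and rules are invented for illustration only.

def votes_yes(gain, risk_tolerance, status_quo_penalty=0.15):
    """A member backs the deal only if perceived gain outweighs
    risk aversion plus a small status-quo penalty."""
    return (gain - (1.0 - risk_tolerance) - status_quo_penalty) > 0

def committee_decides(members):
    """The deal proceeds only on a clear majority; ties default to the status quo."""
    yes = sum(votes_yes(gain, tolerance) for gain, tolerance in members)
    return yes > len(members) / 2

# A champion, a blocker, and three near-neutral moderates: (gain, risk_tolerance).
committee = [(0.9, 0.8), (0.1, 0.2), (0.5, 0.5), (0.45, 0.55), (0.55, 0.45)]
print("Deal goes ahead?", committee_decides(committee))  # prints: Deal goes ahead? False
```

With the moderates netting out at roughly zero, even a modest penalty pushes them all to "no" and the deal dies at the status quo.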

So, how do we stickhandle past this bias? The common approach is to divide and conquer – identifying the players and tailoring messages to speak directly to them. The counterintuitive finding of the CEB Challenger Customer research was that dividing and conquering is absolutely the wrong thing to do. It actually lessens the possibility of making a sale. While this sounds just plain wrong, it makes sense if we shift our perspective from the selling side to the buying side.

With our vendor goggles on, we believe that if we tailor messaging to appeal to every individual’s own value proposition, we can build consensus and drive the deal forward. And that would be true – if every member of our buying committee were acting rationally. But as we soon see when we put on the buying goggles, they’re not. Their irrational biases are firmly stacked up on the “do nothing” side of the ledger. And by tailoring messaging in different directions, we’re actually just giving them more things to disagree about. We’re creating dysfunction rather than eliminating it. Disagreements almost always default back to the status quo, because it’s the least risky option. The group may not agree about much, but they can agree that the incumbent solution creates the least disruption.

So what do you do? Well, I won’t steal the CEB’s thunder here, because The Challenger Customer is absolutely worth a read if you’re a B2B vendor. The authors – Brent Adamson, Matthew Dixon, Pat Spenner and Nick Toman – lay out a step-by-step strategy for getting around the Status Quo Bias. The trick is to create a common psychological frame where everyone can agree that doing nothing is the riskiest alternative. But biases are notoriously sticky things. Setting up a commonly understood frame requires a deep understanding of the group dynamics at play. The one thing I really appreciate about CEB’s approach is that it’s “psychologically sound.” They make no assumptions about buyer rationality. They know that emotions ultimately drive all human behavior, and B2B purchases are no exception.

We’re Becoming Intellectually “Obese”

Humans are defined by scarcity. All our evolutionary adaptations tend to be built to ensure survival in harsh environments. This can sometimes backfire on us in times of abundance.

For example, humans are great at foraging. We have built-in algorithms that tell us which patches are most promising and when we should give up on the patch we’re in and move to another patch.

We’re also good at borrowing strategies that evolution designed for one purpose and applying them for another purpose. This is called exaptation. For example, we’ve exapted our food foraging strategies and applied them to searching for information in an online environment. We use these skills when we look at a website, conduct an online search or scan our email inbox. But as we forage for information – or food – we have to remember, this same strategy assumes scarcity, not abundance.

Take food for example. Nutritionally we have been hardwired by evolution to prefer high fat, high calorie foods. That’s because this wiring took place in an environment of scarcity, where you didn’t know where your next meal was coming from. High fat, high calorie and high salt foods were all “jackpots” if food was scarce. Eating these foods could mean the difference between life and death. So our brains evolved to send us a reward signal when we ate these foods. Subsequently, we naturally started to forage for these things.

This was all good when our home was the African savannah. Not so good when it’s Redondo Beach, there’s a fast food joint on every corner and the local Wal-Mart’s shelves are filled to overflowing with highly processed pre-made meals. We have “refined” food production to continually push our evolutionary buttons, gorging ourselves to the point of obesity. Foraging isn’t a problem here. Limiting ourselves is.

So, evolution has made humans good at foraging when things are scarce, but not so good at filtering in an environment of abundance. I suspect the same thing that happened with food is today happening with information.

Just like we are predisposed to look for food that is high in fats, salt and calories, we are drawn to information that:

  1. Leads to us having sex
  2. Leads to us having more than our neighbors
  3. Leads to us improving our position in the social hierarchy

All those things make sense in an evolutionary environment where there’s not enough to go around. But, in a society of abundance, they can cause big problems.

Just like food, for most of our history information was in short supply. We had to make decisions based on too little information, rather than too much. So most of our cognitive biases were developed to allow us to function in a setting where knowledge was in short supply and decisions had to be made quickly. In such an environment, these heuristic short cuts would usually end up working in our favor, giving us a higher probability of survival.

These evolutionary biases become dangerous as our information environment becomes more abundant. We weren’t built to rationally seek out and judiciously evaluate information. We were built to make decisions based on little or no knowledge. There is an override switch we can use if we wish, but it’s important to know that just like we’re inherently drawn to crappy food, we’re also subconsciously drawn to crappy information.

Whether or not you agree with the mainstream news sources, the fact is that there was a thoughtful editorial process, which was intended to improve the quality of the information we were provided. Entire teams of people were employed to spend their days rationally thinking about gathering, presenting and validating the information that would be passed along to the public. In Nobel laureate Daniel Kahneman’s terminology, they were “thinking slow” about it. And because the transactional costs of getting that information to us were so high, there was a relatively strong signal-to-noise ratio.

That is no longer the case. Transactional costs have dropped to the point that it costs almost nothing to get information to us. This allows information providers to completely bypass any editorial loop and get it in front of us. Foraging for that information is not the problem. Filtering it is. As we forage through potential information “patches” – whether they be on Google, Facebook or Twitter – we tend to “think fast” – clicking on the links that are most tantalizing.

I would never have dreamed that having too much information could be a bad thing. But most of the cautionary columns I’ve written in the last few years seem to have the same root cause – we’re becoming intellectually “obese.” We’ve developed an insatiable appetite for fast, fried, sugar-frosted information.

 

Damn You Technology…

Quit batting your seductive visual sensors at me. You know I can’t resist. But I often wonder what I’m giving up when I give in to your temptations. That’s why I was interested in reading Tom Goodwin’s take on the major theme at SXSW – the Battle for Humanity. He broke this down into three sub-themes. I agree with them. In fact, I’ve written on all of them in the past. They were:

Data Trading – We’re creating a market for data. But when you’re the one that generated that data, who should own it?

Shift to No Screens – an increasing number of connected devices will change our concept of what it means to be online.

Content Tunnel Vision – As the content we see is increasingly filtered based on our preferences, what does that do for our perception of what is real?

But while we’re talking about our imminent surrender to the machines, I feel there are some other themes that also merit some discussion. Let’s limit it to two today.

A New Definition of Connection and Community

Robert Sapolsky

A few weeks ago I read a fascinating article by neuroendocrinologist and author Robert Sapolsky. In it, he posits that understanding Capgras Syndrome is the key to understanding the Facebook society. Capgras, first identified by French psychiatrist Joseph Capgras, is a disorder in which we can recognize a person’s face but can’t retrieve the feelings of familiarity that go with it. Those afflicted can identify the face of a loved one but swear that it’s actually an identical imposter. Recognition of a person and retrieval of the emotions attached to that person are handled by two different parts of the brain. When the connection between them is broken, Capgras Syndrome is the result.

This bifurcation of how we identify people is interesting. There is the yin and yang of cognition and emotion. The fusiform gyrus cognitively “parses” the face and then the brain retrieves the emotions and memories that are associated with it. To a normally functioning brain, it seems seamless and connected, but because two different regions (or, in the case of emotion, a network of regions) are involved, they can neurologically evolve independently of each other. And in the age of Facebook, that could mean a significant shift in the way we recognize connections and create “cognitive communities.” Sapolsky elaborates:

Through history, Capgras syndrome has been a cultural mirror of a dissociative mind, where thoughts of recognition and feelings of intimacy have been sundered. It is still that mirror. Today we think that what is false and artificial in the world around us is substantive and meaningful. It’s not that loved ones and friends are mistaken for simulations, but that simulations are mistaken for them.

As I said in a column a few months back, we are substituting surface cues for familiarity. We are rushing into intimacy without all the messy, time-consuming process of understanding and shared experience that generally accompanies it.

Brains do love to take shortcuts. They’re not big on heavy lifting. Here’s another example of that…

Free Will is Replaced with An Algorithm

Yuval Harari

In a conversation with historian Yuval Harari, author of the best seller Sapiens, Derek Thompson from the Atlantic explored “The Post Human World.” One of the topics they discussed was the End of Individualism.

Humans (or, at least, most humans) have believed our decisions come from a mystical soul – a transcendental something that lives above our base biology and is in control of our will. Wrapped up in this is the concept of us as an individual and our importance in the world as free thinking agents.

In the past few decades, there has been a growing realization that our notion of “free will” is just the result of a cascade of biochemical processes. There is nothing magical here; there is just a chain of synaptic switches being thrown. And that being the case – if a computer can process things faster than our brains, should we simply relegate our thinking to a machine?

In many ways, this is already happening. We trust Google Maps or our GPS device more than we trust our own ability to find our way. We trust Google Search more than our own memory. We’re on the verge of trusting our wearable fitness trackers more than our own body’s feedback. And in all these cases, our trust in tech is justified – these things are right more often than we are. But when it comes to humans vs. machines, they represent a slippery slope that we’re already well down. Harari speculates about what might be at the bottom:

What really happens is that the self disintegrates. It’s not that you understand your true self better, but you come to realize there is no true self. There is just a complicated connection of biochemical connections, without a core. There is no authentic voice that lives inside you.

When I lie awake worrying about technology, these are the types of things I think about. The big question is: is humanity an outmoded model? The fact is that we evolved to be successful in a certain environment. But here’s the irony: we were so successful that we changed that environment into one where the tools we’ve created, not their creators, are the most successful adaptation. We may have made ourselves obsolete. And that’s why really smart humans, like Bill Gates, Elon Musk and Stephen Hawking, are so worried about artificial intelligence.

“It would take off on its own, and re-design itself at an ever increasing rate,” said Hawking in a recent interview with BBC. “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Worried about a machine taking your job? That may be the least of your worries.

 

 

You’ve got a Friend in Me – Our Changing Relationship with A.I.

Since Siri first stepped into our lives in 2011, we’ve been introduced to more and more digital assistants. We’ve met Amazon’s Alexa, Microsoft’s Cortana and Google’s Google Now. We know them, but do we love them?

Apparently, it’s important that we bond with said digital assistants and snappy comebacks appear to be the surest path to our hearts. So, if you ask Siri if she has a boyfriend, she might respond with, “Why? So we can get ice cream together, and listen to music, and travel across galaxies, only to have it end in slammed doors, heartbreak and loneliness? Sure, where do I sign up?” It seems to know a smart-assed digital assistant is to love her – but just be prepared for that love to be unrequited.

Not to be outdone, Google is also brushing up on its witty repartee for its new digital assistant – thanks to some recruits from the Onion and Pixar. A recent Mediapost article said that Google had just assembled a team of writers from those two sources – tapping the Onion for caustic sarcasm and Pixar for a gentler, more human touch.

But can we really be friends with a machine, even if it is funny?

Microsoft thinks so. They’ve unveiled a new chatbot in China called Xiaoice (pronounced Shao-ice). Xiaoice takes on the persona of a 17-year-old girl who responds to questions like “How would you like others to comment on you when you die one day?” with the plaintive “The world would not be much different without me.” Perhaps this isn’t as clever as Siri’s comebacks, but there’s an important difference: Siri’s responses were specifically scripted for anticipated questions, while Xiaoice actually converses with you using true artificial intelligence and linguistic processing.

In a public test on WeChat, Xiaoice received 1.5 million chat group invitations in just 72 hours. As of earlier this year, she had had more than 10 billion conversations. In a blog post, Xiaoice’s “father”, Yongdong Wang, head of the Microsoft Application and Services Group East Asia, said, “Many see Xiaoice as a partner and friend, and are willing to confide in her just as they do with their human friends. Xiaoice is teaching us what makes a relationship feel human, and hinting at a new goal for artificial intelligence: not just analyzing databases and driving cars, but making people happier.”

When we think of digital assistants, we naturally think of the advantages that machines have over humans: unlimited memory, access to the entire web, vastly superior number crunching skills and much faster processing speeds. This has led to “cognitive offloading” – humans transferring certain mental processing tasks to machines. We now trust Google more than our own memory for retrieving information – just as we trust calculators more than our own limited mathematical abilities. But there should be some things that humans are just better at. Being human, for instance. We should be more empathetic – better able to connect with other people. A machine shouldn’t “get us” better than our spouse or best friend.

For now, that’s probably still true. But what if you don’t have a spouse, or even a best friend? Is having a virtual friend better than nothing at all? Recent studies have shown that robotic pets seem to ease loneliness among isolated seniors. More research is needed, but it’s not really surprising to learn that a warm, affectionate robot is better than no companion at all. What was surprising was that in one study, seniors preferred a robotic dog to the real thing.

The question remains, however: can we truly have a relationship with a machine? Can we feel friendship – or even love – when we know the machine can’t do the same? This goes beyond the high-tech flirtation of discovering Siri’s or Google’s “easter egg” responses to something more fundamental. It touches on what appears to be happening in China, where millions are making a chatbot their personal confidant. I suspect there are more than a few lonely Chinese users who would consider Xiaoice their best friend.

And – on many levels – that scares the hell out of me.