Risk, Reward and the Rebound Market

Twelve years ago, when looking at B2B purchases and buying behaviors, I talked about a risk/reward matrix. I put forward the thought that all purchases have an element of risk and reward in them. In understanding the balance between those two, we can also understand what a buyer is going through.

At the time, I noted that many B2B purchases offer low reward but carry high risk. This explains the often-arduous B2B buying process, involving RFPs, approved vendor lists, many levels of sign-off and a nasty track record of promising prospects suddenly disappearing from a vendor's lead pipeline. It was this mystifying marketplace that prompted us to undertake a large research investigation into B2B buying, and that led to me writing the book, The Buyersphere Project: How Businesses Buy from Businesses in the Digital Marketplace.

When I wrote about the matrix right here on Mediapost back then, there were those who said I had oversimplified buying behavior – that even the addition of a third dimension would make the model more accurate and more useful. Better yet, do some stat crunching on real-time data, as suggested by Andre Szykier:

“Simple StatPlot or SPSS in the right hands is the best approach rather than simplistic model proposed in the article.”

Perhaps, but for me, this model still serves as a quick and easy way to start to understand buyer behavior. As British statistician George E. P. Box once said, “All models are wrong, but some are useful.”

Fast forward to the unusual times we now find ourselves in. As I have said before, as we emerge from a forced two-year hiatus from normal, it’s inevitable that our definitions of risk and reward in buying behaviors might have to be updated. I was reminded of this when I read last week’s commentary – “Cash-Strapped Consumers Seek Simple Pleasures” by Aaron Paquette. He starts by saying, “With inflation continuing to hover near 40-year highs, consumers seek out savings wherever they can find them — except for one surprising segment.”

Surprising? Not when I applied the matrix. It made perfect sense. Paquette goes on,

“Consumers will trade down for their commodities, but they pay up for their sugar, caffeine or cholesterol fix. They’re going without new clothes or furniture, and buying the cheapest pantry staples, to free scarce funds for a daily indulgence. Starbucks lattes aren’t bankrupting young adults — it’s their crushing student loans. And at a time when consumers face skyrocketing costs for energy, housing, education and medical care, they find that a $5 Big Mac, Frappuccino, or six pack of Coca-Cola is an easy way to ‘treat yo self.’”

I have talked before about what we might expect as the market puts a global pandemic behind us. The concepts of balancing risk and reward are very much at the heart of our buying behaviors. Sociologist Nicholas Christakis explores this in his book Apollo’s Arrow. Right now, we’re in a delicate transition time. We want to reward ourselves, but we’re still highly risk-averse. We’re going to make purchases that fall into that quadrant of the matrix: rewarding, but low-risk.

This is a likely precursor to what’s to come, when we move into reward-seeking with a higher tolerance of risk. Christakis predicts this will come sometime in 2024: “What typically happens is people get less religious. They will relentlessly seek out social interactions in nightclubs and restaurants and sporting events and political rallies. There’ll be some sexual licentiousness. People will start spending their money after having saved it. There’ll be joie de vivre and a kind of risk-taking, a kind of efflorescence of the arts, I think.”

The consumer numbers shared by Paquette show we’re dipping our toes into the waters of hedonism. The party hasn’t started yet, but we are more than ready to indulge ourselves a little with a reward that doesn’t carry a lot of risk.

Dealing with Daily Doom

“We are Doomed”

The tweet came yesterday from a celebrity I follow. And you know what? I didn’t even bother to find out in which particular way we were doomed. That’s probably because my social media feeds are filled with daily predictions of doom. The end being nigh has ceased to be news. It’s become routine. That is sad. But more than that, it’s dangerous.

This is why Joe Mandese and I have agreed to disagree about the role media can play in messaging around climate change, or – for that matter – any of the existential threats now facing us. Alarmist messaging could be the problem, not the solution.

Mandese ended his post with this:

“What the ad industry really needs to do is organize a massive global campaign to change the way people think, feel and behave about the climate — moving from a not-so-alarmist “change” to an “our house is on fire” crisis.”

Joe Mandese – Mediapost

But here’s the thing. Cranking up the crisis intensity on our messaging might have the opposite effect. It may paralyze us.

Something called “doom scrolling” is now very much a thing. And if you’re looking for Doomsday scenarios, the best place to start is the r/collapse subreddit.

In a 30-second glimpse during the writing of this column, I discovered that democracy is dying, America is on the brink of civil war, Russia is turning off the tap on European oil supplies, we are being greenwashed into complacency, the Amazon Rainforest may never recover from its current environmental destruction and the “Doomsday” glacier is melting faster than expected. That was all above the fold. I didn’t even have to scroll for this all-you-can-eat buffet of disaster. These were just the appetizers.

There is a reason why social media feeds are full of doom. We are hardwired to pay close attention to threats. This makes apocalyptic prophesying very profitable for social media platforms. As British academic Julia Bell said in her 2020 book, Radical Attention,

“Behind the screen are impassive algorithms designed to ensure that the most outrageous information gets to our attention first. Because when we are enraged, we are engaged, and the longer we are engaged the more money the platform can make from us.”

Julia Bell – Radical Attention

But just what does a daily diet of doom do for our mental health? Does constantly making us aware of the impending end of our species goad us into action? Does it actually accomplish anything?

Not so much. In fact, it can do the opposite.

Mental health professionals are now treating a host of new climate-change-related conditions, including eco-grief, eco-anxiety and eco-depression. But, perhaps most alarmingly, they are now encountering something called eco-paralysis.

In an October 2020 Time.com piece on doom scrolling, psychologist Patrick Kennedy-Williams, who specializes in treating climate-related anxieties, was quoted as saying, “There’s something inherently disenfranchising about someone’s ability to act on something if they’re exposed to it via social media, because it’s inherently global. There are not necessarily ways that they can interact with the issue.”

So, cranking up the intensity of the messaging on existential threats such as climate change may have the opposite effect, by scaring us into doing nothing. This is because of something called the Yerkes-Dodson Law.

[Figure: The Yerkes-Dodson curve. Image: Yerkes and Dodson (1908), reproduced in Diamond DM, et al. (2007), “The Temporal Dynamics Model of Emotional Memory Processing,” Neural Plasticity, doi:10.1155/2007/60803; Wikimedia Commons, CC0.]

This “Law,” discovered by psychologists Robert Yerkes and John Dodson in 1908, isn’t so much a law as a psychological model. It traces a typical bell curve. On the front end, our performance in responding to a situation increases along with our attention and interest in that situation. But the line does not go straight up. At some point, it peaks and then goes downhill. Intent gives way to anxiety. The more anxious we become, the more our performance is impaired.
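For the mathematically inclined, one stylized way to write that inverted-U (a simplification I'm adding for illustration, not Yerkes and Dodson's original formulation) is:

\[ \text{Performance}(a) \;\approx\; P_{\max} - k\,(a - a^{*})^{2} \]

where a is arousal (our attention to, and alarm about, the threat), a* is the optimal level, and k sets how steeply performance collapses once anxiety pushes us past that sweet spot. Below a*, more attention helps; beyond it, every extra unit of alarm costs us.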

When we fret about the future, we are actually grieving the loss of our present. In this process, we must make our way through the five stages of grief introduced by psychiatrist Elisabeth Kübler-Ross in 1969 through her work with terminally ill patients. The stages are: Denial, Anger, Bargaining, Depression and Acceptance.

One would think that triggering awareness would help accelerate us through the stages. But there are a few key differences. In dealing with a diagnosis of terminal illness, there is typically one hammer-blow event when you become aware of the situation. From there, dealing with it begins. And – even when it begins – it’s not a linear journey. As anyone who has ever grieved will tell you, what stage you’re in depends on which day you’re asked. You can slip from Acceptance to Anger in a heartbeat.

With climate change, awareness doesn’t come just once. The messaging never ends. It’s a constant state of crisis, trapping us in a loop that cycles between denial, depression and despair.

An excellent post on Climateandmind.org on climate grief talks about this cycle and how we get trapped within it. Some of us get stuck in a stage and never move on. Even climate scientist and activist Susanne Moser admits to being trapped in something she calls Functional Denial,

“It’s that simultaneity of being fully aware and conscious and not denying the gravity of what we’re creating (with Climate Change), and also having to get up in the morning and provide for my family and fulfill my obligations in my work.”

Susanne Moser

It’s exactly this sense of frustration I voiced in my previous post. But the answer is not making me more aware. Like Moser, I’m fully aware of the gravity of the various threats we’re facing. It’s not attention I lack, it’s agency.

I think the time to hope for a more intense form of messaging to prod the deniers into acceptance is long past. If they haven’t changed their minds yet, they ain’t goin’ to!

I also believe the messaging we need won’t come through social media. There’s just too much froth and too much profit in that froth.

What we need – from media platforms we trust – is a frank appraisal of the worst-case scenario of our future. We need to accept that and move on to deal with what is to come. We need to encourage resilience and adaptability. We need hope that while what is to come is most certainly going to be catastrophic, it doesn’t have to be apocalyptic.

We need to know we can survive and start thinking about what that survival might look like.

The Tricky Timing Of Being Amazed By The Future

When I was a kid, the future was a big deal. The Jetsons cartoon was introduced in 1962. We were in the thick of the space race. Science was doing amazing things. What the future might look like was the theme of fairs and exhibits around the world, including my little corner of it in Western Canada. I remember going to an exhibit about the Amazing World of Tomorrow at the Calgary Stampede when I was 7 or 8, so either in 1968 or 1969.

Walt Disney was also a big fan of the future. That’s why you have Tomorrowland at Disneyland in Anaheim, California and Epcot at Walt Disney World in Kissimmee, Florida. Disney mused, “Tomorrow can be a wonderful age. Our scientists today are opening the doors of the Space Age to achievements that will benefit our children and generations to come. The Tomorrowland attractions have been designed to give you an opportunity to participate in adventures that are a living blueprint of our future.”

But the biggest problem with Tomorrowland is that the future kept becoming the present and – in doing so – it became no big deal. The first Tomorrowland opened in 1955 and the “future” it envisioned was 1986. From then forward, Disney has continually tried to keep Tomorrowland from becoming Yesterdayland. It was an example of just how short the shelf life of “Tomorrow” actually is.

For example, in 1957, the Monsanto House of the Future was introduced in California’s Tomorrowland. The things that amazed visitors then were microwave ovens and television remote controls. The amazement factor of these two things didn’t last very long. But even so, they lasted longer than the Viewliner – “the fastest miniature train in the world.” That Tomorrowland attraction lasted just one year.

Oh, and then there was the video phone.

In the 1950s and ’60s, we were fascinated by the idea of having a video call with someone. I remember seeing a videophone demonstrated at the fair I went to as a kid. It was probably the AT&T Picturephone, which was introduced at the 1964 New York World’s Fair. We were all suitably amazed.

But the Picturephone wasn’t really new. Bell Labs had been working on it since 1927. A large-screen videophone was shown in Charlie Chaplin’s 1936 film, Modern Times. Even with this decades-long run-up, when AT&T tried to make it commercially viable in 1970, it was a dismal failure. This just shows how fragile the timing is when trying to bring the future to today. If it’s too soon, everyone is scared to adopt it. If it’s too late, it’s boring. More than anything, our appreciation of the future comes down to a matter of luck.

Here are a few more examples. Yesterday, I got a call on my mobile when I couldn’t get to my phone, so I answered it on my Apple Watch. My father-in-law happened to be with me. “You answered the phone on your watch? Now I’ve seen everything!” He was amazed, but for me it was commonplace. If we backtrack to 1946, when the comic strip character Dick Tracy introduced his wrist radio, it was almost unimaginably cool. Well, it was unimaginable to everyone but inventor Al Gross, who had actually built such a device. That’s where Tracy’s creator, Chester Gould, got the idea.

Or teleconferencing. Today, in our post-COVID world, Zoom meetings are the norm, even mundane. But the technology we take for granted today has been 150 years in the making. It was in the 1870s that Bell Labs (again) first came up with the idea of transmitting both an image and audio over wire.

Like most things, the tricky timing of our relationship with the future is a product of how our brains work. We use our remembered past as the springboard to try to imagine the future. And our degree of amazement depends on how big the gap is between the two.

In the 1950s, H.M. (research patients were usually known only by their initials) was a patient who suffered from epilepsy. He underwent an experimental surgery that removed several parts of his brain, including his entire hippocampus, which is vital for memory. In that surgery, H.M. not only lost his past, but he also became unable to imagine the future. Since then, functional MRI studies have found that the same parts of the brain are involved in both retrieving memories and imagining the future.

In both these instances, the brain creates a scene. If it’s in the past, we relive a memory, often with questionable fidelity to what actually happened. Our brains are notoriously creative at filling gaps in our memories with things we just make up. And if it’s in the future, we prelive the scene, using what we know to build what the future might look like.

How amazing the future is to us depends on the gap between what we know and what we’re able to imagine. The bigger the gap that we’re able to manage, the more we’re amazed. But as the future becomes today, the gap narrows dramatically, and the amazement drops accordingly. Adoption of new technologies depends in part on being able to squeeze through this rapidly narrowing window. If the window is too big, we aren’t willing to take on the risks involved. If the window is too small, there’s not enough of an advantage for us to adopt the future technology.

Even with this challenge of timing, the future is relentless. It comes to us in wave after wave, passing from being amazing to boring. In the process, we sometimes have to look back to realize how far we’ve come.

I was thinking about that and about the 7-year-old boy I was, standing looking at the Picturephone at the Calgary Stampede in 1968. As amazing as it seemed to me at the time, how could I possibly imagine the world I live in today, a little over a half century later?

With Digital Friends Like These, Who Needs Enemies?

Recently, I received an email from Amazon that began:

“You’re amazing. Really, you’re awesome! Did that make you smile? Good. Alexa is here to compliment you. Just say, ‘Alexa, compliment me’”

“What,” I said to myself, “sorry-assed state is my life in that I need to depend on a little black electronic hockey puck to affirm my self-worth as a human being?”

I realize that the tone of the email likely had tongue at least partway implanted in cheek, but still, seriously – WTF Alexa? (Which, incidentally, Alexa also has covered. Pose that question and Alexa responds, “I’m always interested in feedback.”)

My next thought was, maybe I think this is a joke, but there are probably people out there who need this. Maybe their lives are dangling by a thread and it’s Alexa’s soothing voice digitally pumping their tires that keeps them hanging on until tomorrow. And – if that’s true – should I be the one to scoff at it?

I dug a little further into the question, “Can we depend on technology for friendship, for understanding, even – for love?”

The answer, it turns out, is probably yes.

A few studies have shown that we will share more with a virtual therapist than a human one in a face-to-face setting. We feel heard without feeling judged.

In another study, patients with a virtual nurse ended up creating a strong relationship with it that included:

  • Using close forms of greeting and goodbye
  • Expressing happiness to see the nurse
  • Using compliments
  • Engaging in social chat
  • And expressing a desire to work together and speak with the nurse again

Yet another study found that robots can even build a stronger relationship with us by giving us a pat on the hand or touching our shoulder. We are social animals and don’t do well when we lose that sociability. If we go too long without being touched, we experience something called “skin hunger” and start feeling stressed, depressed and anxious. The use of these robots is being tested in seniors’ care facilities to help combat extreme loneliness.

In reading through these studies, I was amazed at how quickly respondents seemed to bond with their digital allies. We have highly evolved mechanisms that determine when and with whom we seem to place trust. In many cases, these judgements are based on non-verbal cues: body language, micro-expressions, even how people smell. It surprised me that when our digital friends presented none of these, the bonds still developed. In fact, it seems they were deeper and stronger than ever!

Perhaps it’s the very lack of humanness that is the explanation. As in the case of the success of a virtual therapist, maybe these relationships work because we can leave the baggage of being human behind. Virtual assistants are there to serve us, not judge or threaten us. We let our guards down and are more willing to open up.

Also, I suspect that the building blocks of these relationships are put in place not by the rational, thinking part of our brains but the emotional, feeling part. It’s been shown that self-affirmation works by activating the reward centers of our brain, the ventral striatum and ventromedial prefrontal cortex. These are not pragmatic, cautious parts of our cognitive machinery. As I’ve said before, they’re all gas and no brakes. We don’t think a friendship with a robot is weird because we don’t think about it at all, we just feel better. And that’s enough.

AI companionship seems a benign – even beneficial – use of technology, but what might the unintended consequences be? Are we opening ourselves up to potential dangers by depending on AI for our social contact – especially when the lines are blurred between for-profit motives and the affirmation we become dependent on?

In the therapeutic use cases of virtual relationships outlined up to now, there is no for-profit motive. But Amazon, Apple, Facebook, Google and the other providers of consumer-directed AI companionship are definitely in it for the money. Even more troubling, two of those – Facebook and Google – depend on advertising for their revenue. Much as this gang would love us to believe that they only have our best interests in mind, over $1.2 trillion in combined revenue says otherwise. I suspect they have put a carefully calculated price on digital friendship.

Perhaps it’s that – more than anything – that threw up the red flags when I got that email from Amazon. It sounded like it was coming from a friend, and that’s exactly what worries me.

Does Social Media “Dumb Down” the Wisdom of Crowds?

We assume that democracy is the gold standard of sustainable political social contracts. And it’s hard to argue against that. As Winston Churchill said, “democracy is the worst form of government – except for all the others that have been tried.”

Democracy may not be perfect, but it works. Or, at least, it seems to work better than all the other options. Essentially, democracy depends on probability – on being right more often than we’re wrong.

At the very heart of democracy is the principle of majority rule. And that is based on something called the Jury Theorem, put forward by the Marquis de Condorcet in his 1785 work, Essay on the Application of Analysis to the Probability of Majority Decisions. Essentially, it says that the probability of making the right decision increases when you aggregate the decisions of as many people as possible. This was the basis of James Surowiecki’s 2004 book, The Wisdom of Crowds.
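For the curious, the arithmetic behind Condorcet's insight is simple (this is a standard modern statement of the theorem, not his original notation). If each of an odd number n of voters independently gets the answer right with probability p, the chance that the majority is right is:

\[ P(\text{majority correct}) \;=\; \sum_{k=(n+1)/2}^{n} \binom{n}{k}\, p^{k} (1-p)^{n-k} \]

When p is better than a coin flip, that probability climbs toward certainty as n grows: with p = 0.6, three independent voters get the majority call right about 65% of the time, while 101 of them are right more than 97% of the time. The catch, as we're about to see, is the word "independently."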

But here’s the thing about the wisdom of crowds – it only applies when those individual decisions are reached independently. Once we start influencing each other’s decisions, that wisdom disappears. And that makes social psychologist Solomon Asch’s famous conformity experiments of 1951 a disturbingly significant fly in the ointment of democracy.

You’re probably all aware of the seminal study, but I’ll recap anyway. Asch gathered groups of people and showed them a reference line, along with a card bearing three lines of obviously different lengths. Then he asked participants which of the three lines was closest in length to the reference line. The answer was obvious – even a toddler can get this test right pretty much every time.

But unknown to the test subject, all the rest of the participants were “stooges” – actors paid to sometimes give an obviously incorrect answer. And when this happened, Asch was amazed to find that the test subjects often went against the evidence of their own eyes just to conform with the group. When wrong answers were given, subjects conformed on about a third of the trials, 75% of subjects conformed at least once, and only 25% consistently stuck to the evidence in front of them and gave the right answer.

The results baffled Asch. The most interesting question to him was why this was happening. Were people making a decision to go against their better judgment – choosing to go with the crowd rather than what they were seeing with their own eyes? Or was something happening below the level of consciousness? This was something Solomon Asch wondered about right until his death in 1996. Unfortunately, he never had the means to explore the question further.

But, in 2005, a group of researchers at Emory University, led by Gregory Berns, did have a way. Asch’s experiment was restaged, only this time participants were in an fMRI machine so Berns and his researchers could peek at what was actually happening in their brains. The results were staggering.

They found that conformity actually changes the way our brain works. It’s not that we change what we say to conform with what others are saying, despite what we see with our own eyes. What we see is changed by what others are saying.

If, Berns and his researchers reasoned, you were consciously making a decision to go against the evidence of your own eyes just to conform with the group, you should see activity in the frontal areas of the brain that are engaged in monitoring conflicts, planning and other higher-order mental activities.

But that isn’t what they found. In those participants who went along with obviously incorrect answers from the group, activity showed up only in the posterior parts of the brain – those that control spatial awareness and visual perception. There was no indication of an internal mental conflict. The brain was actually changing how it processed the information it was receiving from the eyes.

This is stunning. It means that conformity isn’t a conscious decision. Our desire to conform is wired so deeply in our brains, it actually changes how we perceive the world. We never have the chance to be objectively right, because we never realize we’re wrong.

But what about those who resisted conformity and stuck to the evidence they were seeing with their own eyes? Here again, the results were fascinating. The researchers found that in these cases, there was a spike of activity in the right amygdala and right caudate nucleus – areas involved in the processing of strong emotions, including fear, anger and anxiety. Those who stuck to the evidence of their own eyes had to overcome emotional hurdles to do so. In the published paper, the authors called this the “pain of independence.”

This study highlights a massively important limitation in the social contract of democracy. As technology increasingly imposes social conformity on our culture, we lose the ability to collectively make the right decision. Essentially, it shows that this effect not only erases the wisdom of crowds, but actively works against it by exacting an emotional price for being an independent thinker.

The Biases of Artificial Intelligence: Our Devils are in the Data

I believe that – over time – technology does move us forward. I further believe that, even with all the unintended consequences it brings, technology has made the world a better place to live in. I would rather step forward with my children and grandchildren (the first of whom has just arrived) into a more advanced world than step backwards into the world of my grandparents, or my great-grandparents. We now have a longer and better life, thanks in large part to technology. This, I’m sure, makes me a techno-optimist.

But my optimism is of a pragmatic sort. I’m fully aware that it is not a smooth path forward. There are bumps and potholes aplenty along the way. I accept that along with my optimism.

Technology, for example, does not play all that fairly. Techno-optimists tend to be white and mostly male. They usually come from rich countries, because technology helps rich countries far more than it helps poor ones. Technology plays by the same rules as trickle-down economics: a rising tide that will eventually raise all boats, just not at the same rate.

Take democracy, for instance. In June 2009, journalist Andrew Sullivan declared “The revolution will be Twittered!” after protests erupted in Iran. Techno-optimists and neo-liberals were quick to declare social media and the Internet the saviour of democracy. But, even then, the optimism was premature – even misplaced.

In his book The Net Delusion: The Dark Side of Internet Freedom, journalist and social commentator Evgeny Morozov details how digital technologies have been used just as effectively by repressive regimes to squash democracy. The book was published in 2011. Just five years later, that same technology would take the U.S. on a path that came perilously close to dismantling democracy. As of right now, we’re still not sure how it will all work out. As Morozov reminds us, technology – in and of itself – is not an answer. It is a tool. Its impact will be determined by those who build the tool and, more importantly, those who use it.

Also, tools are not built out of the ether. They are necessarily products of the environment that spawned them. And this brings us to the systemic problems of artificial intelligence.

Search is something we all use every day. And we probably don’t think of Google (or other search engines) as biased, or even racist. But a recent study published in the journal Proceedings of the National Academy of Sciences shows that the algorithms behind search are built on top of the biases endemic in our society.

“There is increasing concern that algorithms used by modern AI systems produce discriminatory outputs, presumably because they are trained on data in which societal biases are embedded,” says Madalina Vlasceanu, a postdoctoral fellow in New York University’s psychology department and the paper’s lead author.

To assess possible gender bias in search results, the researchers examined whether words that should refer with equal probability to a man or a woman, such as “person,” “student,” or “human,” are more often assumed to be a man. They conducted Google image searches for “person” across 37 countries. The results showed that the proportion of male images yielded from these searches was higher in nations with greater gender inequality, revealing that algorithmic gender bias tracks with societal gender inequality.

In a 2020 opinion piece in the MIT Technology Review, researcher and AI activist Deborah Raji wrote:

“I’ve often been told, ‘The data does not lie.’ However, that has never been my experience. For me, the data nearly always lies. Google Image search results for ‘healthy skin’ show only light-skinned women, and a query on ‘Black girls’ still returns pornography. The CelebA face data set has labels of ‘big nose’ and ‘big lips’ that are disproportionately assigned to darker-skinned female faces like mine. ImageNet-trained models label me a ‘bad person,’ a ‘drug addict,’ or a ‘failure.’ Data sets for detecting skin cancer are missing samples of darker skin types.”

Deborah Raji, MIT Technology Review

These biases in search highlight the biases in a culture. Search brings back a representation of content that has been published online; a reflection of a society’s perceptions. In these cases, the devil is in the data. The search algorithm may not be inherently biased, but it does reflect the systemic biases of our culture. The more biased the culture, the more it will be reflected in technologies that comb through the data created by that culture. This is regrettable in something like image search results, but when these same biases show up in the facial recognition software used in the justice system, it can be catastrophic.

In an article in Penn Law’s Regulatory Review, the authors reported that, “In a 2019 National Institute of Standards and Technology report, researchers studied 189 facial recognition algorithms—“a majority of the industry.” They found that most facial recognition algorithms exhibit bias. According to the researchers, facial recognition technologies falsely identified Black and Asian faces 10 to 100 times more often than they did white faces. The technologies also falsely identified women more than they did men—making Black women particularly vulnerable to algorithmic bias. Algorithms using U.S. law enforcement images falsely identified Native Americans more often than people from other demographics.”

Most of these issues lie with how technology is used. But how about those that build the technology? Couldn’t they program the bias out of the system?

There we have a problem. The thing about societal bias is that it is typically recognized by its victims, not by those who propagate it. And the culture of the tech industry is hardly gender-balanced or diverse. According to a report from the McKinsey Institute for Black Economic Mobility, experts in tech believe that, on the current trajectory, it would take 95 years for Black workers to reach an equitable level of private-sector paid employment.

Facebook, for example, barely moved a percentage point in its share of Black tech workers, going from 3% in 2014 to 3.8% in 2020, but improved by 8% over those same six years in hiring women. Only 4.3% of the company’s workforce is Hispanic. This essential whiteness of tech extends to the field of AI as well.

Yes, I’m a techno-optimist, but I realize that optimism must be placed in the people who build and use the technology. And because of that, we must try harder. We must do better. Technology alone isn’t the answer for a better, fairer world.  We are.

The Physical Foundations of Friendship

It’s no secret that I worry about what the unintended consequences might be for us as we increasingly substitute a digital world for a physical one. What might happen to our society as we spend less time face-to-face with people and more time face-to-face with a screen?

Take friendship, for example. I have written before about how Facebook friends and real friends are not the same thing. A lot of this has to do with the mental work required to maintain a true friendship. This cognitive requirement led British anthropologist Robin Dunbar to come up with something called Dunbar’s Number – a rough rule-of-thumb that says we can’t really maintain a network of more than 150 friends, give or take a few.

Before you say, “I have way more friends on Facebook than that,” realize that I don’t care what your Facebook Friend count is. Mine numbers at least three times Dunbar’s 150 limit. But they are not all true friends. Many are just the result of me clicking a link on my laptop. It’s quick, it’s easy, and there is absolutely no requirement to put any skin in the game. Once clicked, I don’t have to do anything to maintain these friendships. They are just part of a digital tally that persists until I might click again, “unfriending” them. Missing is the ongoing physical friction that demands the maintenance required to keep a true friendship from slipping into entropy.

So I was wondering – what is that magical physical and mental alchemy that causes us to become friends with someone in the first place? When we share physical space with another human, what is the spark that causes us to want to get to know them better? Or – on the flip side – what are the red flags that cause us to head for the other end of the room to avoid talking to them? Fortunately, there is some science that has addressed those questions.

We become friends because of something sociologists call homophily – being like each other. In today’s world, that leads to some unfortunate social consequences, but in our evolutionary environment, it made sense. It has to do with kinship ties and what ethologist Richard Dawkins called The Selfish Gene. We want family to survive to pass on our genes. The best way to motivate us to protect others is to have an emotional bond with them. And it just so happens that family members tend to look somewhat alike. So we like – or love – others who are like us.

If we tie in the impact of geography over our history, we start to understand why this is so. Geography that restricted travel and led to inbreeding generally dictated a certain degree of genetic “sameness” in our tribe. It was a quick way to sort in-groups from out-groups. And in a bloodier, less politically correct world, this was a matter of survival.

But this geographic connection works both ways. Geographic restrictions lead to homophily, but repeated exposure to the same people also increases the odds that you’ll like them. In psychology, this is called the mere-exposure effect.

In these two ways, the limitations of a physical world have a deep, deep impact on the nature of friendship. But let’s focus on the first for a moment.

It appears we have built-in “friend detectors” that can actually sense genetic similarities. In a rather fascinating study, Nicholas Christakis and James Fowler found that friends are so alike genetically, they could actually be family. If you drill down to the individual building blocks of a gene at the nucleotide level, your friends are as genetically similar to you as your fourth cousin. As Christakis and Fowler say in their study, “friends may be a kind of ‘functional kin’.”

This shows how deeply friendship bonds are hardwired into us. Of course, this doesn’t happen equally across all genes. Evolution is nothing if not practical. For example, Christakis and Fowler found that specific systems stay “heterophilic” (not alike) – such as our immune system. This makes sense. A group of people who stay in close proximity to each other is going to be more resistant to epidemics if there is some variety in what they’re individually immune to. If everyone had exactly the same immunity profile, the group would be highly resistant to some bugs and completely vulnerable to others. It would be putting all your disease-prevention eggs in one basket.

But in another example of extreme genetic practicality, how similar we smell to our friends can be determined genetically.  Think about it. Would you rather be close to people who generally smell the same, or those that smell different? It seems a little silly in today’s world of private homes and extreme hygiene, but when you’re sharing very close living quarters with others and there’s no such thing as showers and baths, how everyone smells becomes extremely important.

Christakis and Fowler found that our olfactory sensibilities tend to trend to the homophilic side between friends. In other words, the people we like smell alike. And this is important because of something called olfactory fatigue. We use smell as a difference detector. It warns us when something is not right. And our nose starts to ignore smells it gets used to, even offensive ones. It’s why you can’t smell your own typical body odor. Or, in another even less elegant example, it’s why your farts don’t stink as much as other people’s.

Given all this, it would make sense that if you had to spend time close to others, you would pick people who smelled like you. Your nose would automatically be less sensitive to their smells. And that’s exactly what a new study from the Weizmann Institute of Science found. In the study, the scent signatures of complete strangers were sampled using an electronic sniffer called an eNose. Then the strangers were asked to engage in nonverbal social interactions in pairs. Afterward, they were asked to rate each interaction based on how likely they would be to become friends with the person. The result? Based on their smells alone, the researchers were able to predict with 71% accuracy who would become friends.

The foundations of friendship run deep – down to the genetic building blocks that make us who we are. These foundations were built in a physical world over millions of years. They engage senses that evolved to help us experience that physical world. Those foundations are not going to disappear in the next decade or two, no matter how addictive Facebook or TikTok becomes. We can continue to layer technology over these foundations, but to deny them is to ignore human nature.

As the “Office” Goes, What May Go With It?

In 2017, Apple employees moved into the new Apple headquarters, called the Ring, in Cupertino, California. This was the last passion project of Steve Jobs, who personally made the pitch to Cupertino City Council just months before he passed away. And its design was personally overseen by Apple’s then Chief Design Officer Jony Ive. The new headquarters were meant to give Apple’s Cupertino employees the ultimate “sense of place”. They were designed to be organic and flexible, evolving to continue to meet employees’ needs.

Of course, no one saw a global pandemic in the future. COVID-19 drove almost all those employees to work from home. The massive campus sat empty. And now, as Apple tries to bring everyone back to the Ring, it seems what has evolved are the expectations of the employees, who have taken a hard left turn away from the very idea of “going to work.”

Just last month, Apple had to backtrack on its edict demanding that everyone start coming back to the office three days a week. A group which calls itself “Apple Together” published a letter asking the company to embrace a hybrid work schedule that formalized a remote workplace. And one of Apple’s leading AI engineers, Ian Goodfellow, resigned in May because of Apple’s insistence on going back to the office.

Perhaps Apple’s Ring is just the most elegant example of a last-gasp concept tied to a generation that is rapidly fading from the office into retirement. The Ring could be the world’s biggest and most expensive anachronism. 

The Virtual Workplace debate is not new for Silicon Valley. Almost a decade ago, Marissa Mayer also issued a “Back to the Office” edict when she came from Google to take over the helm at Yahoo. A company memo laid out the logic:

“To become the absolute best place to work, communication and collaboration will be important, so we need to be working side-by-side. That is why it is critical that we are all present in our offices. Some of the best decisions and insights come from hallway and cafeteria discussions, meeting new people, and impromptu team meetings. Speed and quality are often sacrificed when we work from home. We need to be one Yahoo!, and that starts with physically being together.”

Marissa Mayer, Yahoo Company Memo

The memo was not popular with Yahooligans. I was still making regular visits to the Valley back then and heard first-hand the grumblings from some of them. My own agency actually had a similar experience, albeit on a much smaller scale.

Over the past decade – until COVID – employees and employers have tentatively tested the realities of a remote workplace. But in the blink of an eye, the pandemic turned this ongoing experiment into the only option available. If businesses wanted to continue operating, they had to embrace working from home. And if employees wanted to keep their jobs, they had to make room on the dining room table for their laptop. Overnight, Zoom meetings and communicating through Slack became the new normal.

Sometimes, necessity is the mother of adoption. And with a 27-month (and counting) runway to get used to it, it appears that the virtual workplace is here to stay.

In some ways, the virtual office represents the unbundling of our work life. Because our world was constrained by the physical limitations of distance, we tended to deal with a holistic world. Everything came as a package assembled by proximity. We operated inside an ecosystem that shared the same physical space. This was true for almost everything in our lives, including our jobs. The workplace was a place, with physical and social properties that existed within that place.

But technology allows us to unbundle that experience. We can separate work from place. We pick and choose what seem to be the most important things we need to do our jobs and take them with us, free from the physical restraints that once kept us all in the same place at the same time. In that process, there are both intended and unintended consequences.

On the face of it, freeing our work from its physical constraints (when this is possible) makes all kinds of sense. For the employer, it eliminates the need for maintaining a location, along with the expense of doing so. And, when you can work anywhere, you can also recruit from anywhere, dramatically opening up the talent pool.

For the employee, it’s probably even more attractive. You can work on your schedule, giving you more flexibility to maintain a healthy work-life balance. Long and frustrating commutes are eliminated. Your home can be wherever you want to live, rather than where you have to live because of your job.

Like I said, when you look at all these intended consequences, a virtual workplace seems to be all upside, with little downside. However, the downsides are starting to show through the cracks created by the unintended consequences.

To me, this seems somewhat analogous to the introduction of monoculture agriculture. You could say this also represented the unbundling of farming for the sake of efficiency. Focusing on one crop in one place at a time made all kinds of sense. You could standardize planting, fertilizing, watering and harvesting based on what was best for the chosen crop. It allowed for the introduction of machinery, increasing yields and lowering costs. Small wonder that over the past two centuries – and especially since World War II – the world rushed to embrace monoculture agriculture.

But now we’re beginning to see the unintended consequences. Dr. Frank Uekotter, Professor of Environmental Humanities at the University of Birmingham, calls monoculturalism a “centuries-long stumble.” He warns that it has developed its own momentum: “Somehow that fledgling operation grew into a monster. We may have to cut our losses at some point, but monoculture has absorbed decades of huge investment and moving away from it will be akin to attempting a handbrake turn in a supertanker.”

We’re learning – probably too late – that nature never intended plants to be surrounded only by other plants of the same kind. Monocultures lead to higher rates of disease and the degradation of the environment. The most extreme example of this is how monoculture plantations of African oil palm are swallowing the biodiverse Amazon rain forest at an alarming rate. Sometimes, as Joni Mitchell reminds us, “You don’t know what you’ve got til it’s gone.”

The same could be true for the traditional workplace. I think Marissa Mayer was on to something. We are social animals and have evolved to share spaces with others of our species. There is a vast repertoire of evolved mechanisms and strategies that make us able to function in these environments. While a virtual workplace may be logical, we may be sacrificing something more ephemeral that lies buried in our humanness. We can’t see it because we’re not exactly sure what it is, but we’ll know it when we lose it.

Maybe it’s loyalty. A few weeks ago, the Wharton School of Business published an article entitled, “Is Workplace Loyalty Gone for Good?” We have all heard of the “Great Resignation.” Last year, over 40 million people in the US quit their jobs. The advent of the virtual workplace has also meant a virtual job market. Employees are in the driver’s seat. Everything is up for renegotiation. As the article said, “the modern workplace has become increasingly transactional.”

Maybe that’s a good thing. Maybe not. That’s the thing with unintended consequences. Only time will tell.

Minority Report Might Be Here — 30 Years Early

“Sometimes, in order to see the light, you have to risk the dark.”

Iris Hineman – 2002’s Minority Report

I don’t usually look to Hollywood for deep philosophical reflection, but today I’m making an exception. Steven Spielberg’s 2002 film Minority Report is balanced on some fascinating ground, ethically speaking. For me, it brought up a rather interesting question – could you get a clear enough picture of someone’s mental state through their social media feed that would allow you to predict pathological behavior? And – even if you could – should you?

If you’re not familiar with the movie, here is the background on this question. In the year 2054, there are three individuals who possess a psychic ability to see events in the future, primarily premeditated murders. These individuals are known as Precognitives, or Precogs. Their predictions are used to set up a PreCrime Division in Washington, DC, where suspects are arrested before they can commit the crime.

Our Social Media Persona

A persona is a social façade – a mask we don that portrays a role we play in our lives. For many of us that now includes the digital stage of social media. Here too we have created a persona, where we share the aspects of ourselves that we feel we need to put out there on our social media platform of choice.

What may surprise us, however, is that even though we supposedly have control over what we share, even that will reveal a surprising amount about who we are – both intentionally and unintentionally. And, if those clues are troubling, does our society have a responsibility – or the right – to proactively reach out?

In a commentary published in the American Journal of Psychiatry, Dr. Shawn McNeil said of social media,

“Scientists should be able to harness the predictive potential of these technologies in identifying those most vulnerable. We should seek to understand the significance of a patient’s interaction with social media when taking a thorough history. Future research should focus on the development of advanced algorithms that can efficiently identify the highest-risk individuals.”

Dr. Shawn McNeil

Along this theme, a 2017 study (Liu & Campbell) found that where we fall on the so-called “Big Five” personality traits – neuroticism, extraversion, openness, agreeableness and conscientiousness – as well as the “Big Two” metatraits – plasticity and stability – can be a pretty accurate predictor of how we use social media.

But what if we flip this around?  If we just look at a person’s social media feed, could we tell what their personality traits and metatraits are with a reasonable degree of accuracy? Could we, for instance, assess their mental stability and pick up the warning signs that they might be on the verge of doing something destructive, either to themselves or to someone else? Following this logic, could we spot a potential crime before it happens?

Pathological Predictions

Police are already using social media to track suspects and find criminals. But this is typically applied after the crime has occurred. For instance, police departments regularly scan social media using facial recognition technology to track down suspects. They comb a suspect’s social media feeds to establish whereabouts and gather evidence. Of course, you can only scan social content that people are willing to share. But when these platforms are as ubiquitous as they are, it’s constantly astounding that people share as much as they do, even when they’re on the run from the law.

There are certainly ethical questions about mining social media content for law enforcement purposes. For example, facial recognition algorithms tend to have flaws when it comes to false positives with those of darker complexion, leading to racial profiling concerns. But at least this activity tries to stick with the spirit of the tenet that our justice system is built on: you are innocent until proven guilty.

There must be a temptation, however, to go down the same path as Minority Report and try to pre-empt crime – by identifying a “Precrime”.

Take a school shooting, for example. In the May 31 issue of Fortune, senior technology journalist Jeremy Kahn asked this question: “Could A.I. prevent another school shooting?” In the article, Kahn referenced a study in which a team at the Cincinnati Children’s Hospital Medical Center used artificial intelligence software to analyze transcripts of teens who went through a preliminary interview with psychiatrists. The goal was to see how well the algorithm compared to more extensive assessments by trained psychiatrists in determining whether the subject had a propensity to commit violence. They found that the assessments matched about 91% of the time.

I’ll restate that so the point hits home: An A.I. algorithm that scanned a preliminary assessment could match much more extensive assessments done by expert professionals 9 out of 10 times –  even without access to the extensive records and patient histories that the psychiatrists had at their disposal.

Let’s go one step further and connect those two dots: If social media content could be used to identify potentially pathological behaviors, and if an AI could then scan that content to predict whether those behaviors could lead to criminal activities, what do we do with that?

It puts us squarely on a very slippery slope, but we have to acknowledge that we are getting very close to a point where technology forces us to ask a question we’ve never been able to ask before: “If we – with a reasonable degree of success – could prevent violent crimes that haven’t happened yet, should we?”

Sarcastic Much?

“Sarcasm is the lowest form of wit, but the highest form of intelligence.”

Oscar Wilde

I fear the death of sarcasm is nigh. The alarm bells started going off when I saw a tweet from John Cleese that referenced a bit from “The Daily Show.” In it, Trevor Noah used sarcasm to run circles around the logic of Supreme Court Justice Brett Kavanaugh, who had opined that Roe v. Wade should be overturned, essentially booting the question down to the state level to decide.

Against my better judgement, I started scrolling through the comments on the thread — and, within the first couple, found that many of those commenting had completely missed Noah’s point. They didn’t pick up on the sarcasm — at all. In fact, to say they missed the point is like saying Columbus “missed” India. They weren’t even in the same ocean. Perhaps not the same planet.

Sarcasm is my mother tongue. I am fluent in it. So I’m very comfortable with sarcasm. I tend to get nervous in overly sincere environments.

I find sarcasm requires almost a type of meta-cognition, where you have to be able to mentally separate the speaker’s intention from what they’re saying. If you can hold the two apart in your head, you can truly appreciate the art of sarcasm. It’s this finely balanced and recurrent series of contradictions — with tongue firmly placed in cheek — that makes sarcasm so potentially powerful. As used by Trevor Noah, it allows us to air out politically charged issues and consider them at a mental level at least one step removed from our emotional gut reactions.

As Oscar Wilde knew — judging by his quote at the beginning of the post — sarcasm can be a nasty form of humor, but it does require some brain work. It’s a bit of a mental puzzle, forcing us to twist an issue in our heads like a cognitive Rubik’s Cube, looking at it from different angles. Because of this, it’s not for everyone. Some people are just too earnest (again, with a nod to Mr. Wilde) to appreciate sarcasm.

The British excel at sarcasm. John Cleese is a high priest of sarcasm. That’s why I follow him on Twitter. Wilde, of course, turned sarcasm into art. But as Ricky Gervais (who has his own black belt in sarcasm) explains in this piece for Time, sarcasm — and, to be more expansive, all types of irony — have been built into the British psyche over many centuries. This isn’t necessarily true for Americans. 

“There’s a received wisdom in the U.K. that Americans don’t get irony. This is of course not true. But what is true is that they don’t use it all the time. It shows up in the smarter comedies but Americans don’t use it as much socially as Brits. We use it as liberally as prepositions in everyday speech. We tease our friends. We use sarcasm as a shield and a weapon. We avoid sincerity until it’s absolutely necessary. We mercilessly take the piss out of people we like or dislike basically. And ourselves. This is very important. Our brashness and swagger is laden with equal portions of self-deprecation. This is our license to hand it out.”

Ricky Gervais – Time, November 9, 2011

That was written just over a decade ago. I believe it’s even more true today. If you choose to use sarcasm in our age of fake news and social media, you do so at your peril. Here are three reasons why:

First, as Gervais points out, sarcasm doesn’t play equally across all cultures. Americans — as one example — tend to be more sincere and, as such, take many things meant as sarcastic at face value. Sarcasm might hit home with a percentage of a U.S. audience, but it will go over a lot of American heads. It’s probably not a coincidence that many of those heads might be wearing MAGA hats.

Also, sarcasm can be fatally hamstrung by our TL;DR rush to scroll to the next thing. Sarcasm typically saves its payoff until the end. It intentionally creates a cognitive gap, and you have to be willing to stay with it to realize that someone is, in the words of Gervais, taking the “piss out of you.” Bail too early and you might never recognize it as sarcasm. I suspect more than a few of those who watched Trevor Noah’s piece didn’t stick through to the end before posting a comment.

Finally, and perhaps most importantly, social media tends to strip sarcasm of its context, leaving it hanging out there to be misinterpreted. If you are a regular watcher of “The Daily Show with Trevor Noah,” or “Last Week Tonight with John Oliver,” or even “Late Night with Seth Meyers” (Meyers being one American who is a master of sarcasm), you realize that sarcasm is part and parcel of it all. But when you repost any bit from any of these shows to social media, moving it beyond its typical audience, you have also removed all the warning signs that say “warning: sarcastic content ahead.” You are leaving the audience to their own devices to “get it.” And that almost never turns out well on social media.

You may say that this is all for the good. The world doesn’t really need more sarcasm. An academic study found that sarcastic messages can be more hurtful to the recipient than a sincere message. Sarcasm can cut deep, and because of this, it can lead to more interpersonal conflict.

But there’s another side to sarcasm. That same study also found that sarcasm can require us to be more creative. The mental mechanisms you use to understand sarcasm are the very same ones we need to use to be more thoughtful about important issues. It de-weaponizes these issues by using humor, while it also forces us to look at them in new ways.

Personally, I believe our world needs more Trevor Noahs, John Olivers and Seth Meyers. Sarcasm, used well, can make us a little smarter, a little more open-minded, and — believe it or not — a little more compassionate.