Crisis? What Crisis?

Never let a good crisis go to waste.

— Winston Churchill, approximately 1944

Crisis? What crisis?

— Supertramp album, 1975

I’ll be honest. I was struggling to finish this column. It was actually heading for the digital dustbin when I happened on MediaPost Editor in Chief Joe Mandese’s excellent commentary, “It’s Time For A Change, And By That, I Mean A Crisis.”

Much as I respect Joe, whose heart and head are definitely in the right place, I think we may have to agree to disagree. He says,

“What the ad industry really needs to do is organize a massive global campaign to change the way people think, feel and behave about the climate — moving from a not-so-alarmist ‘change’ to an ‘our house is on fire’ crisis.”

Joe Mandese – MediaPost

But exactly how do you make people pay attention to an existential crisis? How do you communicate threat?

The problem may be that we can’t. It may simply not be possible.

That was crystallized in the scariest way possible recently on the U.K.’s GB News channel, where an anchor desperately tried to make light of the meteorologist’s dire predictions of potential fatalities ahead of an unprecedented heat wave in England.

Weather expert John Hammond issues a warning over the ‘extreme’ conditions expected next week – GB News – July 14, 2022

The Basics of Communication

There are typically four parts to any communication model: the sender, the message, the medium and the receiver. Joe’s post said the problem may be in the message — it hasn’t been urgent enough. I disagree. I think the problem is at the end of the chain, with the receiver. The message is already urgent enough. It’s just not getting through.

In an online course on business communications, Lumen Learning lists a number of potential barriers to communication. I’d like to focus on three of them: filtering, bias and lack of trust.

The first one is the big one, but the last two contribute. And they all lie on the receiving end of the communication model, with the receiver, who just doesn’t want to receive the message.

The problem, most of all, is one of entitlement.

I’m not pointing fingers — unless I’m pointing at myself. I live a privileged lifestyle. I don’t think I’ve let the message, with all its implications, fully get through to me, because to accept that message is unimaginably depressing and scary. I fully admit I’m filtering, because I feel overwhelmed. Climate change has gone from being an inconvenient truth to something we’re determined to ignore, even if it kills us.

If I count all the people whose lifestyle I have some understanding of, it comes to about a thousand people. I think an overwhelming majority of them get the massive implications of climate change. Yet of all those people, I can count on the fingers of one hand (maybe two) those who have truly made substantive changes in their lifestyle to address climate change. That’s, at best, 0.5% to 1% of everyone I know.

 I’m not judging. I haven’t made the changes required myself. Not really.

I have done all 10 of the UN’s suggested ways to help fight the climate crisis, to one extent or another. But I can’t help feeling that even doing all 10 is like peeing on a forest fire. Given the stakes we’re talking about here, I really don’t feel I’m making a meaningful difference. I haven’t sold either of my two vehicles, stopped planning trips that involve air travel, or moved into a more energy-efficient house. I still eat red meat (although not as much as before).

The fact is, when a message tells us that our inevitable future holds less than we have today, we will ignore that message.

I get it. I truly do. I started and stopped this column several times because it depressed the hell out of me. But I am now determined to plow through to the end, so let’s talk about entitlement. We use this word a lot, especially lately. But what does it mean? It means we believe we have the right to the lifestyle we currently have.

But there’s no one to give us that right. Our lifestyle isn’t granted to us by anyone. If we live a good life, as I do, we like to think it’s due to our hard work and wise choices, and that this is why we’re entitled to everything we have. But if we rationally pick apart our success, we find that plain old dumb luck plays a bigger role than we’d like to admit. In my case, I was born a white, Anglo male in one of the richest countries in the world. I came out of the womb with advantages most of the world can only dream of.

Entitlement is actually the result of a cognitive bias – or rather, a bundle of cognitive biases that includes loss aversion and the endowment effect. It’s a quirk in our mental wiring. It’s a mistaken belief – an illusion. I’m not owed the life I have. I have that life because of a convergence of lucky factors, and it appears my luck may be running out. There is no arbiter of privilege that has granted North America the right to be the single biggest consumer of natural resources (per capita) in the world. But we seem prepared to gamble our planet away on this mistaken belief about our own entitlement.

In psychology, there’s something called the Psychological Entitlement Scale. It measures the strength of this cognitive bias. A recent study showed just how strongly it correlates with our willingness to ignore messaging we don’t want to hear because we feel it interferes with our “rights.” In this case, the message was about health guidelines during COVID-19. And we all know how that turned out. Even something as ridiculously simple as wearing a face mask whipped up a shitstorm of entitlement.

This is not a problem of messaging. We are not going to be persuaded to do the right thing.  We are being asked to give up too much.

Climate change can only be addressed by two things: legislation and a mobilization of the market. We cannot be left with the option of doing nothing — or too little — any longer.

We must be forced to be better. We need more massive omnibus bills, like the recent Manchin-Schumer deal, that mobilize industry and incentivize better behavior. I only hope my own Canadian government follows suit soon.

Much as I wish Joe Mandese were right that by turning up the intensity of the messaging, we could persuade consumers to really move the needle on the climate threat, I don’t think this would work. It’s not that we don’t know about climate change. It’s that we can’t let ourselves care, because our entitlement won’t let us.

The Biases of Artificial Intelligence: Our Devils are in the Data

I believe that – over time – technology does move us forward. I further believe that, even with all the unintended consequences it brings, technology has made the world a better place to live in. I would rather step forward with my children and grandchildren (the first of whom has just arrived) into a more advanced world than step backward into the world of my grandparents, or my great-grandparents. We now live longer and better lives, thanks in large part to technology. This, I’m sure, makes me a techno-optimist.

But my optimism is of a pragmatic sort. I’m fully aware that it is not a smooth path forward. There are bumps and potholes aplenty along the way. I accept that along with my optimism.

Technology, for example, does not play all that fairly. Techno-optimists tend to be white and mostly male. They usually come from rich countries, because technology helps rich countries far more than it helps poor ones. Technology plays by the same rules as trickle-down economics: a rising tide that will eventually raise all boats, just not at the same rate.

Take democracy, for instance. In June 2009, journalist Andrew Sullivan declared “The revolution will be Twittered!” after protests erupted in Iran. Techno-optimists and neo-liberals were quick to declare social media and the Internet the saviours of democracy. But, even then, the optimism was premature – even misplaced.

In his book The Net Delusion: The Dark Side of Internet Freedom, journalist and social commentator Evgeny Morozov details how digital technologies have been used just as effectively by repressive regimes to squash democracy. The book was published in 2011. Just five years later, that same technology would take the U.S. on a path that came perilously close to dismantling democracy. As of right now, we’re still not sure how it will all work out. As Morozov reminds us, technology – in and of itself – is not an answer. It is a tool. Its impact will be determined by those who built the tool and, more importantly, those who use it.

Also, tools are not built out of the ether. They are necessarily products of the environment that spawned them. And this brings us to the systemic problems of artificial intelligence.

Search is something we all use every day, and we probably don’t think of Google (or other search engines) as biased, or even racist. But a recent study published in the journal Proceedings of the National Academy of Sciences shows that the algorithms behind search are built on top of the biases endemic in our society.

“There is increasing concern that algorithms used by modern AI systems produce discriminatory outputs, presumably because they are trained on data in which societal biases are embedded,” says Madalina Vlasceanu, a postdoctoral fellow in New York University’s psychology department and the paper’s lead author.

To assess possible gender bias in search results, the researchers examined whether words that should refer with equal probability to a man or a woman, such as “person,” “student,” or “human,” are more often assumed to be a man. They conducted Google image searches for “person” across 37 countries. The results showed that the proportion of male images yielded from these searches was higher in nations with greater gender inequality, revealing that algorithmic gender bias tracks with societal gender inequality.
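Here’s a minimal sketch of that kind of country-level analysis, just to make the mechanics concrete. Everything in it is a hypothetical placeholder rather than the study’s actual data: for each country you would record the share of male faces returned for a gender-neutral query like “person,” pair it with a gender-inequality index, and check how strongly the two move together.

```python
# Sketch only: hypothetical per-country numbers, not the study's data.
# For each country, pair the share of male faces returned for a
# gender-neutral image query with a gender-inequality index, then correlate.
import numpy as np

male_share = np.array([0.54, 0.61, 0.67, 0.72, 0.58])  # fraction of male images in "person" results (made up)
inequality = np.array([0.08, 0.21, 0.34, 0.45, 0.15])  # hypothetical 0-1 gender-inequality index

r = np.corrcoef(male_share, inequality)[0, 1]
print(f"Pearson r between male-image share and inequality: {r:.2f}")
```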

In a 2020 opinion piece in the MIT Technology Review, researcher and AI activist Deborah Raji wrote:

“I’ve often been told, ‘The data does not lie.’ However, that has never been my experience. For me, the data nearly always lies. Google Image search results for ‘healthy skin’ show only light-skinned women, and a query on ‘Black girls’ still returns pornography. The CelebA face data set has labels of ‘big nose’ and ‘big lips’ that are disproportionately assigned to darker-skinned female faces like mine. ImageNet-trained models label me a ‘bad person,’ a ‘drug addict,’ or a ‘failure.’ Data sets for detecting skin cancer are missing samples of darker skin types.”

Deborah Raji, MIT Technology Review

These biases in search highlight the biases in a culture. Search brings back a representation of content that has been published online; a reflection of a society’s perceptions. In these cases, the devil is in the data. The search algorithm may not be inherently biased, but it does reflect the systemic biases of our culture. The more biased the culture, the more it will be reflected in technologies that comb through the data created by that culture. This is regrettable in something like image search results, but when these same biases show up in the facial recognition software used in the justice system, it can be catastrophic.

In an article in Penn Law’s Regulatory Review, the authors reported that, “In a 2019 National Institute of Standards and Technology report, researchers studied 189 facial recognition algorithms — ‘a majority of the industry.’ They found that most facial recognition algorithms exhibit bias. According to the researchers, facial recognition technologies falsely identified Black and Asian faces 10 to 100 times more often than they did white faces. The technologies also falsely identified women more than they did men — making Black women particularly vulnerable to algorithmic bias. Algorithms using U.S. law enforcement images falsely identified Native Americans more often than people from other demographics.”

Most of these issues lie with how technology is used. But what about those who build the technology? Couldn’t they program the bias out of the system?

There we have a problem. The thing about societal bias is that it is typically recognized by its victims, not by those who propagate it. And the culture of the tech industry is hardly gender-balanced or diverse. According to a report from the McKinsey Institute for Black Economic Mobility, if we follow the current trajectory it would take 95 years for Black workers to reach an equitable level of private-sector employment.

Facebook, for example, barely moved the needle on hiring Black tech workers, going from 3% in 2014 to 3.8% in 2020, but improved by 8% over those same six years in hiring women. Only 4.3% of the company’s workforce is Hispanic. This essential whiteness of tech extends to the field of AI as well.

Yes, I’m a techno-optimist, but I realize that optimism must be placed in the people who build and use the technology. And because of that, we must try harder. We must do better. Technology alone isn’t the answer for a better, fairer world.  We are.

The Physical Foundations of Friendship

It’s no secret that I worry about what the unintended consequences might be for us as we increasingly substitute a digital world for a physical one. What might happen to our society as we spend less time face-to-face with people and more time face-to-face with a screen?

Take friendship, for example. I have written before about how Facebook friends and real friends are not the same thing. A lot of this has to do with the mental work required to maintain a true friendship. This cognitive requirement led British anthropologist Robin Dunbar to come up with something called Dunbar’s Number – a rough rule-of-thumb that says we can’t really maintain a network of more than 150 friends, give or take a few.

Before you say, “I have way more friends on Facebook than that,” realize that I don’t care what your Facebook friend count is. Mine numbers at least three times Dunbar’s 150 limit. But they are not all true friends. Many are just the result of me clicking a link on my laptop. It’s quick, it’s easy, and there is absolutely no requirement to put any skin in the game. Once clicked, I don’t have to do anything to maintain these friendships. They are just part of a digital tally that persists until I might click again, “unfriending” them. Missing is the ongoing physical friction that demands the maintenance required to keep a true friendship from slipping into entropy.

So I was wondering – what is that magical physical and mental alchemy that causes us to become friends with someone in the first place? When we share physical space with another human, what is the spark that causes us to want to get to know them better? Or – on the flip side – what are the red flags that cause us to head for the other end of the room to avoid talking to them? Fortunately, there is some science that has addressed those questions.

We become friends because of something sociologists call homophily – being like each other. In today’s world, that leads to some unfortunate social consequences, but in our evolutionary environment, it made sense. It has to do with kinship ties and what evolutionary biologist Richard Dawkins called the Selfish Gene. We want family to survive to pass on our genes. The best way to motivate us to protect others is to have an emotional bond to them. And it just so happens that family members tend to look somewhat alike. So we like – or love – others who are like us.

If we tie in the impact of geography over our history, we start to understand why this is so. Geography that restricted travel and led to inbreeding generally dictated a certain degree of genetic “sameness” in our tribe. It was a quick way to sort in-groups from out-groups. And in a bloodier, less politically correct world, this was a matter of survival.

But this geographic connection works both ways. Geographic restrictions lead to homophily, but repeated exposure to the same people also increases the odds that you’ll like them. In psychology, this is called the mere-exposure effect.

In these two ways, the limitations of a physical world have a deep, deep impact on the nature of friendship. But let’s focus on the first for a moment.

It appears we have built-in “friend detectors” that can actually sense genetic similarities. In a rather fascinating study, Nicholas Christakis and James Fowler found that friends are so alike genetically, they could actually be family. If you drill down to the individual building blocks of a gene at the nucleotide level, your friends are as alike genetically to you as your fourth cousin. As Christakis and Fowler say in their study, “friends may be a kind of ‘functional kin’.”

This shows how deeply friendship bonds are hardwired into us. Of course, this doesn’t happen equally across all genes. Evolution is nothing if not practical. For example, Christakis and Fowler found that specific systems do stay “heterophilic” (not alike) – such as our immune system. This makes sense. If you have a group of people who stay in close proximity to each other, the group will remain more resistant to epidemics if there is some variety in what its members are individually immune to. If everyone had exactly the same immunity profile, the group would be highly resistant to some bugs and completely vulnerable to others. It would be putting all your disease-prevention eggs in one basket.

But in another example of extreme genetic practicality, how similar we smell to our friends may also be genetically determined. Think about it. Would you rather be close to people who generally smell the same as you, or those who smell different? It seems a little silly in today’s world of private homes and extreme hygiene, but when you’re sharing very close living quarters with others and there’s no such thing as showers and baths, how everyone smells becomes extremely important.

Christakis and Fowler found that our olfactory sensibilities tend to trend to the homophilic side between friends. In other words, the people we like smell like we do. And this is important because of something called olfactory fatigue. We use smell as a difference detector. It warns us when something is not right. And our nose starts to ignore smells it gets used to, even offensive ones. It’s why you can’t smell your own typical body odor. Or, in another even less elegant example, it’s why your farts don’t stink as much as other people’s.

Given all this, it would make sense that if you had to spend time close to others, you would pick people who smelled like you. Your nose would automatically be less sensitive to their smells. And that’s exactly what a new study from the Weizmann Institute of Science found. In the study, the scent signatures of complete strangers were sampled using an electronic sniffer called an eNose. Then the strangers were asked to engage in nonverbal social interactions in pairs. Afterward, they were asked to rate each interaction based on how likely they would be to become friends with the person. The result? Based on their smells alone, the researchers were able to predict with 71% accuracy who would become friends.
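To make that kind of prediction concrete, here’s a toy sketch (not the Weizmann team’s actual method): treat each person’s eNose reading as a feature vector, score pairs by how similar they smell, and guess “friends” when the similarity clears a threshold. The readings, pairs, labels and threshold below are all invented for illustration.

```python
# Toy illustration: similarity of (hypothetical) scent vectors as a friendship predictor.
import numpy as np

def cosine(a, b):
    # cosine similarity between two feature vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
scents = {name: rng.random(8) for name in ["A", "B", "C", "D"]}  # made-up 8-channel eNose readings

pairs = [("A", "B"), ("A", "C"), ("B", "D"), ("C", "D")]
became_friends = [1, 0, 0, 1]  # hypothetical ground truth for each pair

threshold = 0.9  # arbitrary cutoff for "smells alike enough"
predictions = [1 if cosine(scents[x], scents[y]) > threshold else 0 for x, y in pairs]
accuracy = sum(p == t for p, t in zip(predictions, became_friends)) / len(became_friends)
print(f"toy accuracy: {accuracy:.0%}")
```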

The foundations of friendship run deep – down to the genetic building blocks that make us who we are. These foundations were built in a physical world over millions of years. They engage senses that evolved to help us experience that physical world. Those foundations are not going to disappear in the next decade or two, no matter how addictive Facebook or TikTok becomes. We can continue to layer technology over these foundations, but to deny them is to ignore human nature.

As the “Office” Goes, What May Go With It?

In 2017, Apple employees moved into the new Apple headquarters, called the Ring, in Cupertino, California. This was the last passion project of Steve Jobs, who personally made the pitch to Cupertino City Council just months before he passed away. Its design was personally overseen by Apple’s then chief design officer, Jony Ive. The new headquarters were meant to give Apple’s Cupertino employees the ultimate “sense of place”. They were designed to be organic and flexible, evolving to continue to meet their needs.

Of course, no one saw a global pandemic in the future. COVID-19 drove almost all those employees to work from home. The massive campus sat empty. And now, as Apple tries to bring everyone back to the Ring, it seems what have evolved are the expectations of its employees, who have taken a hard left turn away from the very idea of “going to work.”

Just last month, Apple had to backtrack on its edict demanding that everyone start coming back to the office three days a week. A group which calls itself “Apple Together” published a letter asking for the company to embrace a hybrid work schedule that formalized a remote workplace. And one of Apple’s leading AI engineers, Ian Goodfellow, resigned in May because of Apple’s insistence on going back to the office.

Perhaps Apple’s Ring is just the most elegant example of a last-gasp concept tied to a generation that is rapidly fading from the office into retirement. The Ring could be the world’s biggest and most expensive anachronism. 

The Virtual Workplace debate is not new for Silicon Valley. Almost a decade ago, Marissa Mayer also issued a “Back to the Office” edict when she came from Google to take over the helm at Yahoo. A company memo laid out the logic:

“To become the absolute best place to work, communication and collaboration will be important, so we need to be working side-by-side. That is why it is critical that we are all present in our offices. Some of the best decisions and insights come from hallway and cafeteria discussions, meeting new people, and impromptu team meetings. Speed and quality are often sacrificed when we work from home. We need to be one Yahoo!, and that starts with physically being together.”

Marissa Mayer, Yahoo Company Memo

The memo was not popular with Yahooligans. I was still making regular visits to the Valley back then and heard first-hand the grumblings from some of them. My own agency actually had a similar experience, albeit on a much smaller scale.

Over the past decade – until COVID – employees and employers have tentatively tested the realities of a remote workplace. But in the blink of an eye, the pandemic turned this ongoing experiment into the only option available. If businesses wanted to continue operating, they had to embrace working from home. And if employees wanted to keep their jobs, they had to make room on the dining room table for their laptop. Overnight, Zoom meetings and communicating through Slack became the new normal.

Sometimes, necessity is the mother of adoption. And with a 27 (and counting) month runway to get used to it, it appears that the virtual workplace is here to stay.

In some ways, the virtual office represents the unbundling of our worklife. Because our world was constrained by physical limitations of distance, we tended to deal with a holistic world. Everything came as a package that was assembled by proximity. We operated inside an ecosystem that shared the same physical space. This was true for almost everything in our lives, including our jobs. The workplace was a place, with physical and social properties that existed within that place.

But technology allows us to unbundle that experience. We can separate work from place. We pick and choose the things that seem most important to doing our jobs and take them with us, free from the physical restraints that once kept us all in the same place at the same time. In that process, there are both intended and unintended consequences.

On the face of it, freeing our work from its physical constraints (when this is possible) makes all kinds of sense. For the employer, it eliminates the need for maintaining a location, along with the expense of doing so. And, when you can work anywhere, you can also recruit from anywhere, dramatically opening up the talent pool.

For the employee, it’s probably even more attractive. You can work on your schedule, giving you more flexibility to maintain a healthy work-life balance. Long and frustrating commutes are eliminated. Your home can be wherever you want to live, rather than where you have to live because of your job.

Like I said, when you look at all these intended consequences, a virtual workplace seems to be all upside, with little downside. However, the downsides are starting to show through the cracks created by the unintended consequences.

To me, this seems somewhat analogous to the introduction of monoculture agriculture. You could say this also represented the unbundling of farming for the sake of efficiency. Focusing on one crop in one place at a time made all kinds of sense. You could standardize planting, fertilizing, watering and harvesting based on what was best for the chosen crop. It allowed for the introduction of machinery, increasing yields and lowering costs. Small wonder that over the past two centuries – and especially since World War II – the world rushed to embrace monoculture agriculture.

But now we’re beginning to see the unintended consequences. Dr. Frank Uekotter, professor of environmental humanities at the University of Birmingham, calls monoculturalism a “centuries-long stumble.” He warns that it has developed its own momentum: “Somehow that fledgling operation grew into a monster. We may have to cut our losses at some point, but monoculture has absorbed decades of huge investment and moving away from it will be akin to attempting a handbrake turn in a supertanker.”

We’re learning – probably too late – that nature never intended plants to be surrounded only by other plants of the same kind. Monocultures lead to higher rates of disease and the degradation of the environment. The most extreme example of this is how monoculture African oil palm plantations are swallowing the biodiverse Amazon rain forest at an alarming rate. Sometimes, as Joni Mitchell reminds us, “You don’t know what you’ve got til it’s gone.”

The same could be true for the traditional workplace. I think Marissa Mayer was on to something. We are social animals and have evolved to share spaces with others of our species. There is a vast repertoire of evolved mechanisms and strategies that make us able to function in these environments. While a virtual workplace may be logical, we may be sacrificing something more ephemeral that lies buried in our humanness. We can’t see it because we’re not exactly sure what it is, but we’ll know it when we lose it.

Maybe it’s loyalty. A few weeks ago, the Wharton School of Business published an article entitled “Is Workplace Loyalty Gone for Good?” We have all heard of the “Great Resignation.” Last year, more than 40 million people in the U.S. quit their jobs. The advent of the virtual workplace has also meant a virtual job market. Employees are in the driver’s seat. Everything is up for renegotiation. As the article said, “the modern workplace has become increasingly transactional.”

Maybe that’s a good thing. Maybe not. That’s the thing with unintended consequences. Only time will tell.

Minority Report Might Be Here — 30 Years Early

“Sometimes, in order to see the light, you have to risk the dark.”

Iris Hineman – 2002’s Minority Report

I don’t usually look to Hollywood for deep philosophical reflection, but today I’m making an exception. Steven Spielberg’s 2002 film Minority Report is balanced on some fascinating ground, ethically speaking. For me, it brought up a rather interesting question: could you get a clear enough picture of someone’s mental state from their social media feed to predict pathological behavior? And – even if you could – should you?

If you’re not familiar with the movie, here is the background on this question. In the year 2054, there are three individuals who possess a psychic ability to see events in the future, primarily premeditated murders. These individuals are known as Precognitives, or Precogs. Their predictions are used to set up a PreCrime Division in Washington, DC, where suspects are arrested before they can commit the crime.

Our Social Media Persona

A persona is a social façade – a mask we don that portrays a role we play in our lives. For many of us that now includes the digital stage of social media. Here too we have created a persona, where we share the aspects of ourselves that we feel we need to put out there on our social media platform of choice.

What may surprise us, however, is that even though we supposedly have control over what we share, what we do share tells a surprising amount about who we are – both intentionally and unintentionally. And, if those clues are troubling, does our society have a responsibility – or the right – to proactively reach out?

In a commentary published in the American Journal of Psychiatry, Dr. Shawn McNeil said of social media,

“Scientists should be able to harness the predictive potential of these technologies in identifying those most vulnerable. We should seek to understand the significance of a patient’s interaction with social media when taking a thorough history. Future research should focus on the development of advanced algorithms that can efficiently identify the highest-risk individuals.”

Dr. Shawn McNeil

Along this theme, a 2017 study (Liu & Campbell) found that where we fall on the so-called “Big Five” personality traits – neuroticism, extraversion, openness, agreeableness and conscientiousness – as well as the “Big Two” metatraits – plasticity and stability – can be a pretty accurate predictor of how we use social media.

But what if we flip this around?  If we just look at a person’s social media feed, could we tell what their personality traits and metatraits are with a reasonable degree of accuracy? Could we, for instance, assess their mental stability and pick up the warning signs that they might be on the verge of doing something destructive, either to themselves or to someone else? Following this logic, could we spot a potential crime before it happens?
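Just to show what “flipping it around” might look like in practice, here’s a deliberately tiny sketch: learn a mapping from a handful of posts to a single trait score (neuroticism, in this toy case) and apply it to a new post. The posts, scores and model choice are all hypothetical; a real system would need validated trait measures and vastly more data.

```python
# Toy sketch: predict a personality trait score from social media text.
# All posts and scores below are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Ridge

posts = [
    "everything is going wrong again, can't cope",
    "great hike with friends today, feeling grateful",
    "why does everyone ignore me",
    "launched my side project, so excited to share it",
]
neuroticism = [0.9, 0.2, 0.8, 0.1]  # hypothetical self-report scores, 0 to 1

vec = CountVectorizer()             # bag-of-words features
X = vec.fit_transform(posts)
model = Ridge().fit(X, neuroticism) # simple linear mapping from words to trait score

new_post = ["nothing ever works out for me"]
score = model.predict(vec.transform(new_post))[0]
print(f"predicted neuroticism for new post: {score:.2f}")
```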

Pathological Predictions

Police are already using social media to track suspects and find criminals. But this is typically applied after the crime has occurred. For instance, police departments regularly scan social media using facial recognition technology to track down suspects. They comb a suspect’s social media feeds to establish whereabouts and gather evidence. Of course, you can only scan social content that people are willing to share. But when these platforms are as ubiquitous as they are, it’s constantly astounding that people share as much as they do, even when they’re on the run from the law.

There are certainly ethical questions about mining social media content for law enforcement purposes. For example, facial recognition algorithms tend to have flaws when it comes to false positives with those of darker complexion, leading to racial profiling concerns. But at least this activity tries to stick with the spirit of the tenet that our justice system is built on: you are innocent until proven guilty.

There must be a temptation, however, to go down the same path as Minority Report and try to pre-empt crime – by identifying a “Precrime”.

Take a school shooting, for example. In the May 31 issue of Fortune, senior technology journalist Jeremy Kahn asked this question: “Could A.I. prevent another school shooting?” In the article, Kahn referenced a study in which a team at the Cincinnati Children’s Hospital Medical Center used artificial intelligence software to analyze transcripts of teens who went through a preliminary interview with psychiatrists. The goal was to see how closely the algorithm’s judgment of whether a subject had a propensity for violence matched the far more extensive assessments done by trained psychiatrists. They found the assessments matched about 91% of the time.

I’ll restate that so the point hits home: An A.I. algorithm that scanned a preliminary assessment could match much more extensive assessments done by expert professionals 9 out of 10 times –  even without access to the extensive records and patient histories that the psychiatrists had at their disposal.

Let’s go one step further and connect those two dots: If social media content could be used to identify potentially pathological behaviors, and if an AI could then scan that content to predict whether those behaviors could lead to criminal activities, what do we do with that?

It puts us squarely on a very slippery slope, but we have to acknowledge that we are getting very close to a point where technology forces us to ask a question we’ve never been able to ask before: “If we – with a reasonable degree of success – could prevent violent crimes that haven’t happened yet, should we?”

Putting a Label on It

We know that news can be toxic. The state of affairs is so bad that many of the media sources we rely on for information have been demonstrated to be extremely harmful to our society. Misinformation, in its many forms, leads to polarization, the destruction of democracy, the engendering of hate and the devaluing of social capital. It is – quite likely – one of the most destructive forces we face today.

To make matters worse, a study conducted by Ben Lyons from the University of Utah found that we’re terrible at spotting misinformation, yet many of us think we can’t be fooled. Seventy-five percent of us overestimate our ability to spot fake news, by as much as 22 percentile points. And the more overconfident we are, the more likely we are to share false news.

Given the toxic effects of unreliable news reporting, it was only natural that – sooner or later – someone would come up with the logical idea of putting a warning label on it. And that’s exactly what NewsGuard does. Using “trained journalists” to review the most popular news platforms (they say they cover 95% of our news-source engagement), they give each source a badge, ranging from green to red, showing its reliability. In a recent report, they highlighted some of the U.S.’s biggest misinformation culprits (NewsMax.com, TheGatewayPundit.com and TheFederalist.com) and some of the sources that are most reliable (MSNBC.com, NYTimes.com, WashingtonPost.com and NPR.org).

But here’s the question. Just because you slap a warning label on toxic news sources, will it have any effect? That’s exactly what a group of researchers at New York University’s Center for Social Media and Politics wanted to find out. And the answer is both yes and no.

Kevin Aslett, lead author of the paper, said,

“While our study shows that, overall, credibility ratings have no discernible effect on misperceptions or online news consumption behavior of the average user, our findings suggest that the heaviest consumers of misinformation — those who rely on low-credibility sites — may move toward higher-quality sources when presented with news reliability ratings.”

Kevin Aslett, NYU Center for Social Media and Politics

This is interesting. In essence, this study is saying that if you run into the odd unreliable news source and you see a warning label, it will probably have no effect. But if you make a steady diet of unreliable news and see warning label after warning label, it may eventually sink in and cause you to improve your sources for news consumption. This seems to indicate warning labels might have a cumulative effect. The more you’re exposed to them, the more effective they become.

We are, quite literally, of two minds – one driven by reason and one by emotion. Warning labels try to appeal to one mind, but our likelihood of ignoring them comes from the other. The effectiveness of these labels depends on which mind is in the driver’s seat. There is a wide spectrum of circumstances that may bring you face to face with a warning label, and the effectiveness of that label may depend on a sort of cognitive “Russian roulette” – a game of odds to determine whether the label will impact you. If this is the case, it makes sense that the more often you see a warning label, the greater the odds that – at least one time – you might be of a mind to pay attention to it.
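A quick back-of-the-envelope version of that cumulative-odds idea, using a purely made-up per-exposure probability: if any single label has only a small chance p of getting through, the chance that at least one of n exposures lands is 1 - (1 - p)^n, which climbs quickly with repetition.

```python
# Illustrative only: p is an invented per-exposure chance that a warning label "lands".
p = 0.05
for n in (1, 5, 20, 50):
    # probability at least one of n independent exposures gets through
    print(f"{n:>2} exposures: {1 - (1 - p) ** n:.0%} chance at least one lands")
```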

Up in Smoke

This might help explain the so-so track record of warning labels in other arenas. Probably the longest trial run of warning labels has been on cigarette packages. The United States started requiring these labels in 1966. In 2001, my own country – Canada – was the first country in the world to introduce graphic warning labels: huge and horrible pictures of the effects of smoking plastered across every pack of smokes.

This past week, we in Canada went one better. Again, we’re going to be the first country in the world to require warning labels on each and every cigarette. Apparently, our government has bought into the exposure effect of warning labels – more is better.

It seems to be working. In 1965 the smoking rate in Canada was 50%. In 2020 it was 13%.

But a recent study (Strong, Pierce, Pulvers et al) showed that if smokers aren’t ready to quit, warning labels may have “decreased positive perceptions of cigarettes associated with branded cigarette packs but without clearly increasing health concerns. They also increased quitting cognitions but did not affect either cigarette cessation or consumption levels.”

Like I said – just because you get through to one mind doesn’t mean you’ll have any luck with the other.

Side Effects May Include….

Perhaps the most interesting case of warnings in the consumer marketplace is prescription drugs. Because the United States is one of the few places in the world (New Zealand is the other) where prescription drugs can be advertised directly to consumers, the Food and Drug Administration has mandated that ads must include a fair balance of rewards and risks. Advertisers being advertisers, the rewards take up much of the ad, with sunlight-infused shots of people enjoying life thanks to the miracles of the drug in question. But, at the end, there is a laundry list of side effects read in a voiceover, typically at breakneck pace in a deadly monotone.

It’s this example that highlights perhaps the main issue with warning labels: they require a calculation of risk vs. reward. If that weren’t true, we wouldn’t need a warning label. Nobody needs to tell us not to drink battery acid. That’s all risk and no reward. If there’s a label on it, it’s probably on something we want to do but know we shouldn’t.

A study of the effectiveness of these warnings in DTC prescription ads found they become less effective because of something called the argument dilution effect. Ads that include only the worst side effects are more effective than ads that include every potential side effect, even the minor ones. Hence the laundry list. If a drug could cause both sudden heart attacks and minor skin rashes, our mind tends to let these things cancel each other out.

This effect is an example of the heuristic nature of our risk vs reward decision making. It needs to operate quickly, so it relies on the irrational, instinctive part of our neural circuitry. We don’t take the time to weigh everything logically – we make a gut call. Marketers know the science behind this and continually use it to their advantage.

Warning labels are an easy legislative fix to try to plug this imperfectly human loophole. It seems to make sense, but it doesn’t really address the underlying factors. Given enough time and enough exposure, they can shift behaviors, but we shouldn’t rely on them too much.

Memories Made by Media

If you said the year 1967 to me, the memory that would pop into my head would be of Haight-Ashbury (ground zero for the counterculture movement), hippies and the summer of love. In fact, that same memory would effectively stand in for the period 1967 to 1969. In my mind, those three years were variations on the theme of Woodstock, the iconic music festival of 1969.

But none of those are my memories. I was alive, but my own memories of that time are indistinct and fuzzy. I was only 6 that year and lived in Alberta, some 1300 miles from the intersection of Haight and Ashbury Streets, so I have discarded my own personal representative memories. The ones I have were all created by images that came via media.

The Swapping of Memories

This is an example of the two types of memories we have – personal or “lived” memories and collective memories. Collective memories are the memories we get from outside, either from other people or, in my example, from media. As we age, there tends to be a flow back and forth between these two types of memories, with one type coloring the other.

One group of academics proposed an hourglass model as a working metaphor to understand this continuous exchange of memories – with some flowing one way and others flowing the other.  Often, we’re not even aware of which type of memory we’re recalling, personal or collective. Our memories are notoriously bad at reflecting reality.

What is true, however, is that our personal memories and our collective memories tend to get all mixed up. The lower our confidence in our personal memories, the more we tend to rely on collective memories. For periods before we were born, we rely solely on images we borrow.

Iconic Memories

What is true for all memories, ours or the ones we borrow from others, is we put them through a process called “leveling and sharpening.” This is a type of memory consolidation where we throw out some of the detail that is not important to us – this is leveling – and exaggerate other details to make it more interesting – i.e. sharpening.

Take my borrowed memories of 1967, for example. There was a lot more happening in the world than whatever was happening in San Francisco during the Summer of Love, but I haven’t retained any of it in my representative memory of that year. There was a military coup in Greece, the first successful human heart transplant, the creation of the Corporation for Public Broadcasting, a series of deadly tornadoes in Chicago, and Typhoon Emma, which left 140,000 people homeless in the Philippines. But none of that made it into my memory of 1967.

We could call the memories we do keep “iconic” – which simply means we choose symbols to represent a much bigger and more complex reality – like everything that happened in a 365-day stretch five and a half decades ago.

Mass Manufactured Memories

Something else happens when we swap our own personal memories for collective memories – we find much more commonality in our memories. The more removed we become from our own lived experiences, the more our memories become common property.

If I asked you to say the first thing that comes to mind about 2002, you would probably look back through your own personal memory store to see if there was anything there. Chances are it would be a significant event from your own life, and this would make it unique to you. If we had a group of 50 people in a room and I asked that question, I would probably end up with 50 different answers.

But if I asked that same group what first comes to mind when I say the year 1967, we would find much more common ground. And that ground would probably be defined by how we each identify ourselves. Some of you might have the same iconic memory that I do – that of Haight-Ashbury and the Summer of Love. Others may have picked the Vietnam War as the iconic memory from that year. But I would venture to guess that in our group of 50, we would end up with only a handful of answers.

When Memories are Made of Media

I am taking this walk down Memory Lane because I want to highlight how much we rely on the media to supply our collective memories. This dependency is critical, because once media images are processed by us and become part of our collective memories, they hold tremendous sway over our beliefs. These memories become the foundation for how we make sense of the world.

This is true for all media, including social media. A study in 2018 (Birkner & Donk) found that “alternative realities” can be formed through social media to run counter to collective memories formed from mainstream media. Often, these collective memories formed through social media are polarized by nature and are adopted by outlier fringes to justify extreme beliefs and viewpoints. This shows that collective memories are not frozen in time but are malleable – continually being rewritten by different media platforms.

Like most things mediated by technology, collective memories are splintering into smaller and smaller groupings, just like the media that are instrumental in their formation.

Sarcastic Much?

“Sarcasm is the lowest form of wit, but the highest form of intelligence.”

Oscar Wilde

I fear the death of sarcasm is nigh. The alarm bells started going off when I saw a tweet from John Cleese that referenced a bit from “The Daily Show.” In it, Trevor Noah used sarcasm to run circles around the logic of Supreme Court Justice Brett Kavanaugh, who had opined that Roe v. Wade should be overturned, essentially booting the question down to the state level to decide.

Against my better judgement, I started scrolling through the comments on the thread — and, within the first couple, found that many of those commenting had completely missed Noah’s point. They didn’t pick up on the sarcasm — at all. In fact, to say they missed the point is like saying Columbus “missed” India. They weren’t even in the same ocean. Perhaps not the same planet.

Sarcasm is my mother tongue. I am fluent in it. So I’m very comfortable with sarcasm. I tend to get nervous in overly sincere environments.

I find sarcasm requires almost a type of meta-cognition, where you have to be able to mentally separate the speaker’s intention from what they’re saying. If you can hold the two apart in your head, you can truly appreciate the art of sarcasm. It’s this finely balanced and recurrent series of contradictions — with tongue firmly placed in cheek — that makes sarcasm so potentially powerful. As used by Trevor Noah, it allows us to air out politically charged issues and consider them at a mental level at least one step removed from our emotional gut reactions.

As Oscar Wilde knew — judging by his quote at the beginning of the post — sarcasm can be a nasty form of humor, but it does require some brain work. It’s a bit of a mental puzzle, forcing us to twist an issue in our heads like a cognitive Rubik’s Cube, looking at it from different angles. Because of this, it’s not for everyone. Some people are just too earnest (again, with a nod to Mr. Wilde) to appreciate sarcasm.

The British excel at sarcasm. John Cleese is a high priest of sarcasm. That’s why I follow him on Twitter. Wilde, of course, turned sarcasm into art. But as Ricky Gervais (who has his own black belt in sarcasm) explains in this piece for Time, sarcasm — and, to be more expansive, all types of irony — have been built into the British psyche over many centuries. This isn’t necessarily true for Americans. 

“There’s a received wisdom in the U.K. that Americans don’t get irony. This is of course not true. But what is true is that they don’t use it all the time. It shows up in the smarter comedies but Americans don’t use it as much socially as Brits. We use it as liberally as prepositions in everyday speech. We tease our friends. We use sarcasm as a shield and a weapon. We avoid sincerity until it’s absolutely necessary. We mercilessly take the piss out of people we like or dislike basically. And ourselves. This is very important. Our brashness and swagger is laden with equal portions of self-deprecation. This is our license to hand it out.”

Ricky Gervais – Time, November 9, 2011

That was written just over a decade ago. I believe it’s even more true today. If you choose to use sarcasm in our age of fake news and social media, you do so at your peril. Here are three reasons why:

First, as Gervais points out, sarcasm doesn’t play equally across all cultures. Americans — as one example — tend to be more sincere and, as such, take many things meant as sarcastic at face value. Sarcasm might hit home with a percentage of a U.S. audience, but it will go over a lot of American heads. It’s probably not a coincidence that many of those heads might be wearing MAGA hats.

Also, sarcasm can be fatally hamstrung by our TL;DR rush to scroll to the next thing. Sarcasm typically saves its payoff until the end. It intentionally creates a cognitive gap, and you have to be willing to stay with it to realize that someone is, in the words of Gervais, taking the “piss out of you.” Bail too early and you might never recognize it as sarcasm. I suspect more than a few of those who watched Trevor Noah’s piece didn’t stick through to the end before posting a comment.

Finally, and perhaps most importantly, social media tends to strip sarcasm of its context, leaving it hanging out there to be misinterpreted. If you are a regular watcher of “The Daily Show with Trevor Noah,” or “Last Week Tonight with John Oliver,” or even “Late Night with Seth Meyers” (who is one American that’s a master of sarcasm), you realize that sarcasm is part and parcel of it all. But when you repost any bit from any of these shows to social media, moving it beyond its typical audience, you have also removed all the warning signs that say “warning: sarcastic content ahead.” You are leaving the audience to their own devices to “get it.” And that almost never turns out well on social media.

You may say that this is all for the good. The world doesn’t really need more sarcasm. An academic study found that sarcastic messages can be more hurtful to the recipient than a sincere message. Sarcasm can cut deep, and because of this, it can lead to more interpersonal conflict.

But there’s another side to sarcasm. That same study also found that sarcasm can require us to be more creative. The mental mechanisms you use to understand sarcasm are the very same ones we need to use to be more thoughtful about important issues. It de-weaponizes these issues by using humor, while it also forces us to look at them in new ways.

Personally, I believe our world needs more Trevor Noahs, John Olivers and Seth Meyers. Sarcasm, used well, can make us a little smarter, a little more open-minded, and — believe it or not — a little more compassionate.

Using Science for Selling: Sometimes Yes, Sometimes No

A recent study out of Ohio State University seems like one of those that the world really didn’t need. The researchers were exploring whether introducing science into the marketing would help sell chocolate chip cookies.

And for those of us who make a living in marketing, this is one of those things that might make us say, “Duh, you needed research to tell us that? Of course you don’t use science to sell chocolate chip cookies!”

But bear with me, because if we keep asking why enough, we can come up with some answers that might surprise us.

So, what did the researchers learn? I quote,

“Specifically, since hedonic attributes are associated with warmth, the coldness associated with science is conceptually disfluent with the anticipated warmth of hedonic products and attributes, reducing product valuation.”

Ohio State Study

In other words – much simpler and fewer in number – science doesn’t help sell cookies. And that’s because our brains think differently about some things than about others.

For example, a study published in the journal Computers in Human Behavior (Casado-Aranda, Sanchez-Fernandez and Garcia) found that when we’re exposed to “hedonic” ads – ads that appeal to pleasurable sensations – the parts of our brain that retrieve memories kick in. This isn’t true when we see utilitarian ads. Predictably, we approach those ads as a problem to be solved and engage the parts of our brain that control working memory and the ability to focus our attention.

Essentially, these two advertising approaches take two different paths in our awareness: one takes the “thinking” path and one takes the “feeling” path. Or, as Nobel laureate Daniel Kahneman would say, one takes the “thinking slow” path and one takes the “thinking fast” path.

Yet another study begins to show why this may be so. Let’s go back to chocolate chip cookies for a moment. When you smell a fresh baked cookie, it’s not just the sensory appeal “in the moment” that makes the cookie irresistible. It’s also the memories it brings back for you. We know that how things smell is a particularly effective way to trigger this connection with the past. Certain smells – like that of cookies just out of the oven – can be the shortest path between today and some childhood memory. These are called associative memories. And they’re a big part of “feeling” something rather than just “thinking” about it.

At the University of California, Irvine, neuroscientists discovered a very specific type of neuron in our memory centers that oversees the creation of new associative memories. They’re called “fan cells,” and it seems these neurons are responsible for creating the link between new input and those emotion-inducing memories we may have tucked away from our past. And – critically – it seems that dopamine is the key to linking the two. When our brains “smell” a potential reward, it kicks these fan cells into gear and our brain is bathed in the “warm fuzzies.” Lead researcher Kei Igarashi said,

“We never expected that dopamine is involved in the memory circuit. However, when the evidence accumulated, it gradually became clear that dopamine is involved. These experiments were like a detective story for us, and we are excited about the results.”

Kei Igarashi – University of California, Irvine

Not surprisingly – as our first study found – introducing science into this whole process can be a bit of a buzz kill. It would be like inviting Bill Nye the Science Guy to teach you about quantum physics during your Saturday morning cuddle time.

All of this probably seems overwhelmingly academic to you. Selling something like chocolate chip cookies shouldn’t take three different scientific studies and strapping several people inside an fMRI machine to explain. We should be able to rely on our guts, and our guts know that science has no place in a campaign built on an emotional appeal.

But there is a point to all this. Different marketing approaches are handled by different parts of the brain, and knowing that allows us to reinforce our marketing intuition with a better understanding of why we humans do the things we do.

Utilitarian appeals activate the parts of the brain that are front and center, the data crunching, evaluating and rational parts of our cognitive machinery.

Hedonic appeals probe the subterranean depths of our brains, unpacking memories and prodding emotions below the thresholds of us being conscious of the process. We respond viscerally – which literally means “from our guts”.

If we’re talking about selling chocolate chip cookies, we have moved about as far towards the hedonic end of the scale as we can. At the other end we would find something like motor oil – where scientific messaging such as “advanced formulation” or “proven engine protection” would be more persuasive. But almost all other products fall somewhere in between. They are a mix of hedonic and utilitarian factors. And we haven’t even factored in the most significant of all consumer considerations – risk and how to avoid it. Think how complex things would get in our brains if we were buying a new car!

Buying chocolate chip cookies might seem like a no-brainer – because – well – it almost is. Beyond dosing our neural pathways with dopamine, our brains barely kick in when considering whether to grab a bag of Chips Ahoy on our next trip to the store. In fact, the last thing you want your brain to do when you’re craving chewy chocolate is to kick in. Then you would start considering things like caloric intake and how you should be cutting down on processed sugar. Chocolate chip cookies might be a no-brainer, but almost nothing else in the consumer world is that simple.

Marketing is relying more and more on data. But data is typically restricted to answering “who”, “what”, “when” and “where” questions. It’s studies like the ones I shared here that start to pick apart the “why” of marketing.

And when things get complex, asking “why” is exactly what we need to do.

Sensationalizing Scam Culture

We seem to be fascinated by bad behavior. Our popular culture is all agog with grifters and assholes. As TV Blog’s Adam Buckman wrote in March: “Two brand-new limited series premiering this week appear to be part of a growing trend in which some of recent history’s most notorious innovators and disruptors are getting the scripted-TV treatment.”

The two series Buckman was talking about were “Super Pumped: The Battle for Uber,” about Uber CEO Travis Kalanick, and “The Dropout,” about Theranos founder Elizabeth Holmes.

But those are just two examples from a bumper crop of shows about bad behavior. My streaming services are stuffed with stories of scammers. In addition to the two series Buckman mentioned, I just finished Shonda Rhimes’ Netflix series “Inventing Anna,” about Anna Sorokin, who posed as an heiress named Anna Delvey.

All these treatments tread a tight wire of moral judgement, where the examples are presented as antisocial, but in a wink-and-a-nod kind of way, where we not-so-secretly admire these behaviors. Much as the actions are harmful to the well-being of the collective “we,” they do appeal to the selfishness and ambition of “me.”

Most of the examples given are rags-to-riches-to-retribution stories (Holmes was an exception, with her upper-middle-class background). The sky-high ambitions of Kalanick, Holmes and Sorokin were all eventually brought back down to earth. Sorokin and Holmes both ended up in prison, and Kalanick was ousted from the company he founded.

But with the subtlest of twists, these stories wouldn’t have had to end this way. They could have been the story of almost any corporate America hustler who triumphed. With a little more substance and a little less scam, you could swap Elizabeth Holmes for Steve Jobs. They even dressed the same.

Obviously, scamming seems to sell. These people fascinate us. Part of the appeal is no doubt due to a class-conflict narrative: the scrappy hustler climbing the social ranks by whatever means possible. We love to watch “one of us” pull the wool over the eyes of the social elite.

In the case of Anna Sorokin, Laura Craik dissects our fascination in a piece published in the UK’s Evening Standard:

“The reason people are so obsessed with Sorokin is simple: she had the balls to pull off on a grand scale what so many people try and fail to pull off on a small one. To use a phrase popular on social media, Sorokin succeeded in living her best life — right down to the clothes she wore in court, chosen by a stylist. Like Jay Gatsby, she was a deeply flawed embodiment of The American Dream: a person from humble beginnings who rose to achieve wealth and social status. Only her wealth was borrowed and her social status was conferred via a chimera of untruths.”

Laura Craik – UK Evening Standard

This type of behavior is nothing new. It’s always been a part of us. In 1513, a Florentine bureaucrat named Niccolo Machiavelli gave it a name — actually, his name. In writing “The Prince,” he condoned bad behavior as long as the end goal was to elevate oneself. In a Machiavellian world, it’s always open season on suckers: “One who deceives will always find those who allow themselves to be deceived.”

For the past five centuries, Machiavellianism has been synonymous with evil. It was a recognized character flaw, described as “a personality trait that denotes cunningness, the ability to be manipulative, and a drive to use whatever means necessary to gain power. Machiavellianism is one of the traits that forms the Dark Triad, along with narcissism and psychopathy.”

Now, however, that stigma seems to be disappearing. In a culture obsessed with success, Machiavellianism becomes a justifiable means to an end, so much so that we’ve given this culture its own hashtag: #scamculture: “A scam culture is one in which scamming has not only lost its stigma but is also valorized. We rebrand scamming as ‘hustle,’ or the willingness to commodify all social ties, and this is because the ‘legitimate’ economy and the political system simply do not work for millions of Americans.”

It’s a culture that’s very much at home in Silicon Valley. The tech world is steeped in Machiavellianism. Its tenets are accepted — even encouraged — business practices in the Valley. “Fake it til you make it” is tech’s modus operandi. The example of Niccolo Machiavelli has gone from being a cautionary tale to a how-to manual.

But these predatory practices come at a price. Doing business this way destroys trust. And trust is still, by far, the best strategy for our mutual benefit. In behavioral economics, there’s something called “tit for tat,” which according to Wikipedia “posits that a person is more successful if they cooperate with another person. Implementing a tit-for-tat strategy occurs when one agent cooperates with another agent in the very first interaction and then mimics their subsequent moves. This strategy is based on the concepts of retaliation and altruism.”

In countless game theory simulations, tit for tat has proven to be the most successful strategy for long-term success. It assumes a default position of trust, only moving to retaliation if required.
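For anyone who hasn’t seen those simulations, here’s a minimal version of the game they’re usually run on: an iterated prisoner’s dilemma with the standard payoffs, pitting tit for tat against itself and against a pure defector. The round count and payoff values are the conventional textbook ones, not drawn from any particular study.

```python
# Minimal iterated prisoner's dilemma with standard payoffs:
# mutual cooperation (3,3), mutual defection (1,1), exploiting a cooperator (5,0).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5), ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opponent_history):
    # cooperate first, then mirror the opponent's previous move
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=50):
    score_a = score_b = 0
    hist_a, hist_b = [], []  # each strategy only sees the other side's history
    for _ in range(rounds):
        move_a, move_b = strategy_a(hist_b), strategy_b(hist_a)
        pa, pb = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print("tit-for-tat vs tit-for-tat:  ", play(tit_for_tat, tit_for_tat))
print("tit-for-tat vs always-defect:", play(tit_for_tat, always_defect))
```

Run it and tit for tat piles up 150 points against another cooperator, but only 49 against the pure defector (who gets 54). The lesson of the larger tournaments is that when cooperators are around, the trusting-but-retaliating strategy wins on total payoff, while the predator starves once the easy marks are gone.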

Our society needs trust to function properly. In a New York Times op-ed entitled “Why We Need to Address Scam Culture,” Tressie McMillan Cottom writes,  

“Scams weaken our trust in social institutions, but their going mainstream — divorced from empathy for the victims or stigma for the perpetrators — means that we have accepted scams as institutions themselves.”

Tressie McMillan Cottom – NY Times

The reason that trust is more effective than scamming is that predatory practices are self-limiting. You can only be a predator if you have enough prey. In a purely Machiavellian world, trust disappears — and there are no easy marks to prey upon.