Does Social Media “Dumb Down” the Wisdom of Crowds?

We assume that democracy is the gold standard of sustainable political social contracts. And it’s hard to argue against that. As Winston Churchill said, “democracy is the worst form of government – except for all the others that have been tried.”

Democracy may not be perfect, but it works. Or, at least, it seems to work better than all the other options. Essentially, democracy depends on probability – on being right more often than we’re wrong.

At the very heart of democracy is the principle of majority rule. And that is based on something called the jury theorem, put forward by the Marquis de Condorcet in his 1785 work, Essay on the Application of Analysis to the Probability of Majority Decisions. Essentially, it says that as long as each voter is more likely than not to be right, the probability of the group making the right decision increases as you add more voters. This was also the basis of James Surowiecki’s 2004 book, The Wisdom of Crowds.
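To make the arithmetic behind that claim concrete, here is a minimal sketch (my own illustration in Python, with a made-up 55% individual accuracy, not anything from Condorcet or Surowiecki): so long as each voter is independently right more often than not, the odds that the majority is right climb toward certainty as the group grows.

```python
from math import comb

def majority_correct(n: int, p: float) -> float:
    """Probability that a simple majority of n independent voters is right,
    when each voter is right with probability p (Condorcet's jury theorem).
    Assumes n is odd, so there are no ties."""
    k_needed = n // 2 + 1  # smallest number of correct votes that wins
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(k_needed, n + 1))

# Illustrative (made-up) accuracy: each voter is right 55% of the time.
for n in [1, 11, 101, 1001]:
    print(n, round(majority_correct(n, 0.55), 3))
# The probability of a correct majority rises from 0.55 toward 1 as n grows --
# but only because the votes are assumed to be independent, which is the catch
# discussed below.
```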

But here’s the thing about the wisdom of crowds – it only applies when those individual decisions are reached independently. Once we start influencing each other’s decisions, that wisdom disappears. And that makes social psychologist Solomon Asch’s famous conformity experiments of 1951 a disturbingly significant fly in the ointment of democracy.

You’re probably all aware of the seminal study, but I’ll recap anyway. Asch gathered groups of people and showed them a card with a single reference line, alongside a second card with three comparison lines of obviously different lengths. Then he asked participants which of the three lines matched the reference line. The answer was obvious – even a toddler can get this test right pretty much every time.

But unknown to the test subject, all the rest of the participants were “stooges” – actors paid to sometimes give an obviously incorrect answer. And when this happened, Asch was amazed to find that the test subjects often went against the evidence of their own eyes just to conform with the group. When wrong answers were given, subjects went along with them about a third of the time, 75% of the subjects conformed at least once, and only 25% consistently stuck to the evidence in front of them and gave the right answer.

The results baffled Asch. The most interesting question to him was why this was happening. Were people making a decision to go against their better judgment – choosing to go with the crowd rather than what they were seeing with their own eyes? Or was something happening below the level of consciousness? This was something Solomon Asch wondered about right until his death in 1996. Unfortunately, he never had the means to explore the question further.

But, in 2005, a group of researchers at Emory University, led by Gregory Berns, did have a way. Asch’s experiment was restaged, only this time participants were in an fMRI machine so Berns and his researchers could peek at what was actually happening in their brains. The results were staggering.

They found that conformity actually changes the way our brain works. It’s not that we change what we say to conform with what others are saying, despite what we see with our own eyes. What we see is changed by what others are saying.

If, Berns and his researchers reasoned, you were consciously making a decision to go against the evidence of your own eyes just to conform with the group, you should see activity in the frontal areas of the brain that are engaged in monitoring conflicts, planning and other higher-order mental activities.

But that isn’t what they found. In those participants who went along with obviously incorrect answers from the group, activity showed up only in the posterior parts of the brain – those that control spatial awareness and visual perception. There was no indication of an internal mental conflict. The brain was actually changing how it processed the information it was receiving from the eyes.

This is stunning. It means that conformity isn’t a conscious decision. Our desire to conform is wired so deeply in our brains, it actually changes how we perceive the world. We never have the chance to be objectively right, because we never realize we’re wrong.

But what about those who resisted conformity and stuck to the evidence they were seeing with their own eyes? Here again, the results were fascinating. In these cases, the researchers saw a spike of activity in the right amygdala and right caudate nucleus – areas involved in the processing of strong emotions, including fear, anger and anxiety. Those who stuck to the evidence of their own eyes had to overcome emotional hurdles to do so. In the published paper, the authors called this the “pain of independence.”

This study highlights a massively important limitation in the social contract of democracy. As technology increasingly imposes social conformity on our culture, we lose the ability to collectively make the right decision. Essentially, it shows that this effect not only erases the wisdom of crowds, but actively works against it by exacting an emotional price for being an independent thinker.

The Biases of Artificial Intelligence: Our Devils are in the Data

I believe that – over time – technology does move us forward. I further believe that, even with all the unintended consequences it brings, technology has made the world a better place to live in. I would rather step forward with my children and grandchildren (the first of which has just arrived) into a more advanced world than step backward into the world of my grandparents, or my great-grandparents. We now live longer and better lives, thanks in large part to technology. This, I’m sure, makes me a techno-optimist.

But my optimism is of a pragmatic sort. I’m fully aware that it is not a smooth path forward. There are bumps and potholes aplenty along the way. I accept that along with my optimism.

Technology, for example, does not play all that fairly. Techno-optimists tend to be white and mostly male. They usually come from rich countries, because technology helps rich countries far more than it helps poor ones. Technology plays by the same rules as trickle-down economics: a rising tide that will eventually raise all boats, just not at the same rate.

Take democracy, for instance. In June 2009, journalist Andrew Sullivan declared “The revolution will be Twittered!” after protests erupted in Iran. Techno-optimists and neo-liberals were quick to declare social media and the Internet as the saviour of democracy. But, even then, the optimism was premature – even misplaced.

In his book The Net Delusion: The Dark Side of Internet Freedom, journalist and social commentator Evgeny Morozov details how digital technologies have been just as effectively used by repressive regimes to squash democracy. The book was published in 2011. Just five years later, that same technology would take the U.S. on a path that came perilously close to dismantling democracy. As of right now, we’re still not sure how it will all work out. As Morozov reminds us, technology – in and of itself – is not an answer. It is a tool. Its impact will be determined by those who built the tool and, more importantly, those who use it.

Also, tools are not built out of the ether. They are necessarily products of the environment that spawned them. And this brings us to the systemic problems of artificial intelligence.

Search is something we all use every day. And we probably don’t think of Google (or other search engines) as biased, or even racist. But a recent study published in the journal Proceedings of the National Academy of Sciences shows that the algorithms behind search are built on top of the biases endemic in our society.

“There is increasing concern that algorithms used by modern AI systems produce discriminatory outputs, presumably because they are trained on data in which societal biases are embedded,” says Madalina Vlasceanu, a postdoctoral fellow in New York University’s psychology department and the paper’s lead author.

To assess possible gender bias in search results, the researchers examined whether words that should refer with equal probability to a man or a woman, such as “person,” “student,” or “human,” are more often assumed to be a man. They conducted Google image searches for “person” across 37 countries. The results showed that the proportion of male images yielded from these searches was higher in nations with greater gender inequality, revealing that algorithmic gender bias tracks with societal gender inequality.
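As a rough sketch of the kind of analysis the paper describes – my own toy reconstruction with invented numbers, not the authors’ code or data – you could compute each country’s share of male images and check whether it moves together with a gender-inequality index:

```python
from statistics import correlation  # Python 3.10+

# Hypothetical numbers for illustration only -- not the study's data.
# Each pair is (share of male images returned for "person",
#               national gender-inequality index, higher = more unequal).
countries = {
    "A": (0.52, 0.12),
    "B": (0.61, 0.30),
    "C": (0.58, 0.25),
    "D": (0.70, 0.48),
    "E": (0.55, 0.18),
}

male_share = [m for m, _ in countries.values()]
inequality = [g for _, g in countries.values()]

# A positive correlation would mean more male-skewed search results
# in more unequal countries -- the pattern the study reports.
print(round(correlation(male_share, inequality), 2))
```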

In a 2020 opinion piece in the MIT Technology Review, researcher and AI activist Deborah Raji wrote:

“I’ve often been told, ‘The data does not lie.’ However, that has never been my experience. For me, the data nearly always lies. Google Image search results for ‘healthy skin’ show only light-skinned women, and a query on ‘Black girls’ still returns pornography. The CelebA face data set has labels of ‘big nose’ and ‘big lips’ that are disproportionately assigned to darker-skinned female faces like mine. ImageNet-trained models label me a ‘bad person,’ a ‘drug addict,’ or a ‘failure.’ Data sets for detecting skin cancer are missing samples of darker skin types.”

Deborah Raji, MIT Technology Review

These biases in search highlight the biases in a culture. Search brings back a representation of content that has been published online; a reflection of a society’s perceptions. In these cases, the devil is in the data. The search algorithm may not be inherently biased, but it does reflect the systemic biases of our culture. The more biased the culture, the more it will be reflected in technologies that comb through the data created by that culture. This is regrettable in something like image search results, but when these same biases show up in the facial recognition software used in the justice system, it can be catastrophic.

In an article in Penn Law’s Regulatory Review, the authors reported that, “In a 2019 National Institute of Standards and Technology report, researchers studied 189 facial recognition algorithms—“a majority of the industry.” They found that most facial recognition algorithms exhibit bias. According to the researchers, facial recognition technologies falsely identified Black and Asian faces 10 to 100 times more often than they did white faces. The technologies also falsely identified women more than they did men—making Black women particularly vulnerable to algorithmic bias. Algorithms using U.S. law enforcement images falsely identified Native Americans more often than people from other demographics.”

Most of these issues lie with how technology is used. But what about those who build the technology? Couldn’t they program the bias out of the system?

There we have a problem. The thing about societal bias is that it is typically recognized by its victims, not by those who propagate it. And the culture of the tech industry is hardly gender-balanced or diverse. According to a report from the McKinsey Institute for Black Economic Mobility, if we follow the current trajectory, it would take an estimated 95 years for Black workers to reach an equitable level of private-sector employment.

Facebook, for example, barely moved a single percentage point – from 3% in 2014 to 3.8% in 2020 – in its hiring of Black tech workers, but improved by 8% over those same six years in hiring women. Only 4.3% of the company’s workforce is Hispanic. This essential whiteness of tech extends to the field of AI as well.

Yes, I’m a techno-optimist, but I realize that optimism must be placed in the people who build and use the technology. And because of that, we must try harder. We must do better. Technology alone isn’t the answer for a better, fairer world.  We are.

The Physical Foundations of Friendship

It’s no secret that I worry about what the unintended consequences might be for us as we increasingly substitute a digital world for a physical one. What might happen to our society as we spend less time face-to-face with people and more time face-to-face with a screen?

Take friendship, for example. I have written before about how Facebook friends and real friends are not the same thing. A lot of this has to do with the mental work required to maintain a true friendship. This cognitive requirement led British anthropologist Robin Dunbar to come up with something called Dunbar’s Number – a rough rule-of-thumb that says we can’t really maintain a network of more than 150 friends, give or take a few.

Before you say, “I have way more friends on Facebook than that,” realize that I don’t care what your Facebook Friend count is. Mine numbers at least three times more than Dunbar’s 150 limit. But they are not all true friends. Many are just the result of me clicking a link on my laptop. It’s quick, it’s easy, and there is absolutely no requirement to put any skin in the game. Once clicked, I don’t have to do anything to maintain these friendships. They are just part of a digital tally that persists until I might click again, “unfriending” them. Absent is the ongoing physical friction – the maintenance required to keep a true friendship from slipping into entropy.

So I was wondering – what is that magical physical and mental alchemy that causes us to become friends with someone in the first place? When we share physical space with another human, what is the spark that causes us to want to get to know them better? Or – on the flip side – what are the red flags that cause us to head for the other end of the room to avoid talking to them? Fortunately, there is some science that has addressed those questions.

We become friends because of something sociologists call homophily – being like each other. In today’s world, that leads to some unfortunate social consequences, but in our evolutionary environment, it made sense. It has to do with kinship ties and what ethologist Richard Dawkins called The Selfish Gene. We want family to survive to pass on our genes. The best way to motivate us to protect others is to have an emotional bond to them. And it just so happens that family members tend to look somewhat alike. So we like – or love – others who are like us.

If we tie in the impact of geography over our history, we start to understand why this is so. Geography that restricted travel and led to inbreeding generally dictated a certain degree of genetic “sameness” in our tribe. It was a quick way to sort in-groups from out-groups. And in a bloodier, less politically correct world, this was a matter of survival.

But this geographic connection works both ways. Geographic restrictions lead to homophily, but repeated exposure to the same people also increases the odds that you’ll like them. In psychology, this is called the mere-exposure effect.

In these two ways, the limitations of a physical world have a deep, deep impact on the nature of friendship. But let’s focus on the first for a moment.

It appears we have built-in “friend detectors” that can actually sense genetic similarities. In a rather fascinating study, Nicholas Christakis and James Fowler found that friends are so alike genetically, they could actually be family. If you drill down to the individual building blocks of a gene at the nucleotide level, your friends are about as genetically similar to you as your fourth cousins. As Christakis and Fowler say in their study, “friends may be a kind of ‘functional kin’.”

This shows how deeply friendship bonds are hardwired into us. Of course, this doesn’t happen equally across all genes. Evolution is nothing if not practical. For example, Christakis and Fowler found that specific systems do stay “heterophilic” (not alike) – such as our immune system. This makes sense. A group of people who stay in close proximity to each other will be more resistant to epidemics if there is some variety in what they’re individually immune to. If everyone had exactly the same immunity profile, the group would be highly resistant to some bugs and completely vulnerable to others. It would be putting all your disease-prevention eggs in one basket.

But in another example of extreme genetic practicality, how similar we smell to our friends may also be determined genetically. Think about it. Would you rather be close to people who generally smell the same as you, or to those who smell different? It seems a little silly in today’s world of private homes and extreme hygiene, but when you’re sharing very close living quarters with others and there’s no such thing as showers and baths, how everyone smells becomes extremely important.

Christakis and Fowler found that our olfactory sensibilities tend to trend to the homophilic side between friends. In other words, the people we like smell alike. And this is important because of something called olfactory fatigue. We use smell as a difference detector. It warns us when something is not right. And our nose starts to ignore smells it gets used to, even offensive ones. It’s why you can’t smell your own typical body odor. Or, in another even less elegant example, it’s why your farts don’t stink as much as other people’s.

Given all this, it would make sense that if you had to spend time close to others, you would pick people who smelled like you. Your nose would automatically be less sensitive to their smells. And that’s exactly what a new study from the Weizmann Institute of Science found. In the study, the scent signatures of complete strangers were sampled using an electronic sniffer called an eNose. Then the strangers were asked to engage in nonverbal social interactions in pairs. Afterward, they were asked to rate each interaction based on how likely they would be to become friends with the other person. The result? Based on their smells alone, the researchers were able to predict with 71% accuracy who would become friends.

The foundations of friendship run deep – down to the genetic building blocks that make us who we are. These foundations were built in a physical world over millions of years. They engage senses that evolved to help us experience that physical world. Those foundations are not going to disappear in the next decade or two, no matter how addictive Facebook or TikTok becomes. We can continue to layer technology over these foundations, but to deny them is to ignore human nature.

As the “Office” Goes, What May Go With It?

In 2017, Apple employees moved into the new Apple headquarters, called the Ring, in Cupertino, California. This was the last passion project of Steve Jobs, who personally made the pitch to Cupertino City Council just months before he passed away. And its design was personally overseen by Apple’s then Chief Design Officer Jony Ive. The new headquarters were meant to give Apple’s Cupertino employees the ultimate “sense of place”. They were designed to be organic and flexible, evolving to continue to meet employees’ needs.

Of course, no one saw a global pandemic in the future. COVID-19 drove almost all those employees to work from home. The massive campus sat empty. And now, as Apple tries to bring everyone back to the Ring, it seems what has evolved are the expectations of the employees, who have taken a hard left turn away from the very idea of “going to work.”

Just last month, Apple had to backtrack on its edict demanding that everyone start coming back to the office three days a week. A group which calls itself “Apple Together” published a letter asking for the company to embrace a hybrid work schedule that formalized a remote workplace. And one of Apple’s leading AI engineers, Ian Goodfellow, resigned in May because of Apple’s insistence on going back to the office.

Perhaps Apple’s Ring is just the most elegant example of a last-gasp concept tied to a generation that is rapidly fading from the office into retirement. The Ring could be the world’s biggest and most expensive anachronism. 

The Virtual Workplace debate is not new for Silicon Valley. Almost a decade ago, Marissa Mayer also issued a “Back to the Office” edict when she came from Google to take over the helm at Yahoo. A company memo laid out the logic:

“To become the absolute best place to work, communication and collaboration will be important, so we need to be working side-by-side. That is why it is critical that we are all present in our offices. Some of the best decisions and insights come from hallway and cafeteria discussions, meeting new people, and impromptu team meetings. Speed and quality are often sacrificed when we work from home. We need to be one Yahoo!, and that starts with physically being together.”

Marissa Mayer, Yahoo Company Memo

The memo was not popular with Yahooligans. I was still making regular visits to the Valley back then and heard first-hand the grumblings from some of them. My own agency actually had a similar experience, albeit on a much smaller scale.

Over the past decade – until COVID – employees and employers have tentatively tested the realities of a remote workplace. But in the blink of an eye, the pandemic turned this ongoing experiment into the only option available. If businesses wanted to continue operating, they had to embrace working from home. And if employees wanted to keep their jobs, they had to make room on the dining room table for their laptop. Overnight, Zoom meetings and communicating through Slack became the new normal.

Sometimes, necessity is the mother of adoption. And with a 27-month (and counting) runway to get used to it, it appears that the virtual workplace is here to stay.

In some ways, the virtual office represents the unbundling of our work life. Because our lives were constrained by the physical limits of distance, we tended to deal with the world holistically. Everything came as a package that was assembled by proximity. We operated inside an ecosystem that shared the same physical space. This was true for almost everything in our lives, including our jobs. The workplace was a place, with physical and social properties that existed within that place.

But technology allows us to unbundle that experience. We can separate work from place. We pick and choose the things that seem most important to doing our jobs and take them with us, free from the physical restraints that once kept us all in the same place at the same time. In that process, there are both intended and unintended consequences.

On the face of it, freeing our work from its physical constraints (when this is possible) makes all kinds of sense. For the employer, it eliminates the need for maintaining a location, along with the expense of doing so. And, when you can work anywhere, you can also recruit from anywhere, dramatically opening up the talent pool.

For the employee, it’s probably even more attractive. You can work on your schedule, giving you more flexibility to maintain a healthy work-life balance. Long and frustrating commutes are eliminated. Your home can be wherever you want to live, rather than where you have to live because of your job.

Like I said, when you look at all these intended consequences, a virtual workplace seems to be all upside, with little downside. However, the downsides are starting to show through the cracks created by the unintended consequences.

To me, this seems somewhat analogous to the introduction of monoculture agriculture. You could say this also represented the unbundling of farming for the sake of efficiency. Focusing on one crop in one place at a time made all kinds of sense. You could standardize planting, fertilizing, watering and harvesting based on what was best for the chosen crop. It allowed for the introduction of machinery, increasing yields and lowering costs. Small wonder that over the past two centuries – and especially since World War II – the world rushed to embrace monoculture agriculture.

But now we’re beginning to see the unintended consequences. Dr. Frank Uekotter, Professor of Environmental Humanities at the University of Birmingham, calls monoculturalism a “centuries-long stumble.” He warns that it has developed its own momentum: “Somehow that fledgling operation grew into a monster. We may have to cut our losses at some point, but monoculture has absorbed decades of huge investment and moving away from it will be akin to attempting a handbrake turn in a supertanker.”

We’re learning – probably too late – that nature never intended plants to be surrounded only by other plants of the same kind. Monocultures lead to higher rates of disease and the degradation of the environment. The most extreme example of this is how monoculture plantations of African oil palm are swallowing the biodiverse Amazon rain forest at an alarming rate. Sometimes, as Joni Mitchell reminds us, “You don’t know what you’ve got till it’s gone.”

The same could be true for the traditional workplace. I think Marissa Mayer was on to something. We are social animals and have evolved to share spaces with others of our species. There is a vast repertoire of evolved mechanisms and strategies that make us able to function in these environments. While a virtual workplace may be logical, we may be sacrificing something more ephemeral that lies buried in our humanness. We can’t see it because we’re not exactly sure what it is, but we’ll know it when we lose it.

Maybe it’s loyalty. A few weeks ago, the Wharton School of Business published an article entitled, “Is Workplace Loyalty Gone for Good?” We have all heard of the “Great Resignation.” Last year, the US had over 40 million people quit their jobs. The advent of the Virtual Workplace has also meant a virtual job market. Employees are in the driver’s seat. Everything is up for renegotiation. As the article said, “the modern workplace has become increasingly transactional.”

Maybe that’s a good thing. Maybe not. That’s the thing with unintended consequences. Only time will tell.

Minority Report Might Be Here — 30 Years Early

“Sometimes, in order to see the light, you have to risk the dark.”

Iris Hineman – 2002’s Minority Report

I don’t usually look to Hollywood for deep philosophical reflection, but today I’m making an exception. Steven Spielberg’s 2002 film Minority Report is balanced on some fascinating ground, ethically speaking. For me, it brought up a rather interesting question – could you get a clear enough picture of someone’s mental state through their social media feed to predict pathological behavior? And – even if you could – should you?

If you’re not familiar with the movie, here is the background on this question. In the year 2054, there are three individuals who possess a psychic ability to see events in the future, primarily premeditated murders. These individuals are known as Precognitives, or Precogs. Their predictions are used to set up a PreCrime Division in Washington, DC, where suspects are arrested before they can commit the crime.

Our Social Media Persona

A persona is a social façade – a mask we don that portrays a role we play in our lives. For many of us that now includes the digital stage of social media. Here too we have created a persona, where we share the aspects of ourselves that we feel we need to put out there on our social media platform of choice.

What may surprise us, however, is that even though we supposedly have control over what we share, even that will reveal a surprising amount about who we are – both intentionally and unintentionally. And, if those clues are troubling, does our society have a responsibility – or the right – to proactively reach out?

In a commentary published in the American Journal of Psychiatry, Dr. Shawn McNeil said of social media,

“Scientists should be able to harness the predictive potential of these technologies in identifying those most vulnerable. We should seek to understand the significance of a patient’s interaction with social media when taking a thorough history. Future research should focus on the development of advanced algorithms that can efficiently identify the highest-risk individuals.”

Dr. Shawn McNeil

Along this theme, a 2017 study (Liu & Campbell) found that where we fall in the so-called “Big Five” personality traits – neuroticism, extraversion, openness, agreeableness and conscientiousness – as well as the “Big Two” metatraits – plasticity and stability – can be a pretty accurate predictor of how we use social media.

But what if we flip this around?  If we just look at a person’s social media feed, could we tell what their personality traits and metatraits are with a reasonable degree of accuracy? Could we, for instance, assess their mental stability and pick up the warning signs that they might be on the verge of doing something destructive, either to themselves or to someone else? Following this logic, could we spot a potential crime before it happens?

Pathological Predictions

Police are already using social media to track suspects and find criminals. But this is typically applied after the crime has occurred. For instance, police departments regularly scan social media using facial recognition technology to track down suspects. They comb a suspect’s social media feeds to establish whereabouts and gather evidence. Of course, you can only scan social content that people are willing to share. But when these platforms are as ubiquitous as they are, it’s constantly astounding that people share as much as they do, even when they’re on the run from the law.

There are certainly ethical questions about mining social media content for law enforcement purposes. For example, facial recognition algorithms tend to have flaws when it comes to false positives with those of darker complexion, leading to racial profiling concerns. But at least this activity tries to stick with the spirit of the tenet that our justice system is built on: you are innocent until proven guilty.

There must be a temptation, however, to go down the same path as Minority Report and try to pre-empt crime – by identifying a “Precrime”.

Take a school shooting, for example. In the May 31 issue of Fortune, senior technology journalist Jeremy Kahn asked this question: “Could A.I. prevent another school shooting?” In the article, Kahn referenced a study where a team at the Cincinnati Children’s Hospital Medical Center used Artificial Intelligence software to analyze transcripts of teens who went through a preliminary interview with psychiatrists. The goal was to see how well the algorithm compared to more extensive assessments by trained psychiatrists to see if the subject had a propensity to commit violence. They found that assessments matched about 91% of the time.

I’ll restate that so the point hits home: An A.I. algorithm that scanned a preliminary assessment could match much more extensive assessments done by expert professionals 9 out of 10 times –  even without access to the extensive records and patient histories that the psychiatrists had at their disposal.

Let’s go one step further and connect those two dots: If social media content could be used to identify potentially pathological behaviors, and if an AI could then scan that content to predict whether those behaviors could lead to criminal activities, what do we do with that?

It puts us squarely on a very slippery slope, but we have to acknowledge that we are getting very close to a point where technology forces us to ask a question we’ve never been able to ask before: “If we – with a reasonable degree of success – could prevent violent crimes that haven’t happened yet, should we?”

Memories Made by Media

If you said the year 1967 to me, the memory that would pop into my head would be of Haight-Ashbury (ground zero for the counterculture movement), hippies and the summer of love. In fact, that same memory would effectively stand in for the period 1967 to 1969. In my mind, those three years were variations on the theme of Woodstock, the iconic music festival of 1969.

But none of those are my memories. I was alive, but my own memories of that time are indistinct and fuzzy. I was only 6 that year and lived in Alberta, some 1300 miles from the intersection of Haight and Ashbury Streets, so I have discarded my own personal representative memories. The ones I have were all created by images that came via media.

The Swapping of Memories

This is an example of the two types of memories we have – personal or “lived” memories and collective memories. Collective memories are the memories we get from outside, either from other people or, in my example, from media. As we age, there tends to be a flow back and forth between these two types of memories, with one type coloring the other.

One group of academics proposed an hourglass model as a working metaphor to understand this continuous exchange of memories – with some flowing one way and others flowing the other.  Often, we’re not even aware of which type of memory we’re recalling, personal or collective. Our memories are notoriously bad at reflecting reality.

What is true, however, is that our personal memories and our collective memories tend to get all mixed up. The lower our confidence in our personal memories, the more we tend to rely on collective memories. For periods before we were born, we rely solely on images we borrow.

Iconic Memories

What is true for all memories, ours or the ones we borrow from others, is that we put them through a process called “leveling and sharpening.” This is a type of memory consolidation where we throw out some of the detail that is not important to us – this is leveling – and exaggerate other details to make the memory more interesting – i.e. sharpening.

Take my borrowed memories of 1967, for example. There was a lot more happening in the world than whatever was happening in San Francisco during the Summer of Love, but I haven’t retained any of it in my representative memory of that year. For example, there was a military coup in Greece, the first successful human heart transplant, the creation of the Corporation for Public Broadcasting, a series of deadly tornadoes in Chicago and Typhoon Emma left 140,000 people homeless in the Philippines. But none of that made it into my memory of 1967.

We could call the memories we do keep “iconic” – which simply means we choose symbols to represent a much bigger and more complex reality – like everything that happened in a 365-day stretch five and a half decades ago.

Mass Manufactured Memories

Something else happens when we swap our own personal memories for collective memories – we find much more commonality in our memories. The more removed we become from our own lived experiences, the more our memories become common property.

If I asked you to say the first thing that comes to mind about 2002, you would probably look back through your own personal memory store to see if there was anything there. Chances are it would be a significant event from your own life, and this would make it unique to you. If we had a group of 50 people in a room and I asked that question, I would probably end up with 50 different answers.

But if I asked that same group what the first thing is that comes to mind when I say the year 1967, we would find much more common ground. And that ground would probably be defined by how each of us identifies ourselves. Some might have the same iconic memory that I do – that of Haight-Ashbury and the Summer of Love. Others may have picked the Vietnam War as the iconic memory from that year. But I would venture to guess that in our group of 50, we would end up with only a handful of answers.

When Memories are Made of Media

I am taking this walk down Memory Lane because I want to highlight how much we rely on the media to supply our collective memories. This dependency is critical, because once media images are processed by us and become part of our collective memories, they hold tremendous sway over our beliefs. These memories become the foundation for how we make sense of the world.

This is true for all media, including social media. A study in 2018 (Birkner & Donk) found that “alternative realities” can be formed through social media to run counter to collective memories formed from mainstream media. Often, these collective memories formed through social media are polarized by nature and are adopted by outlier fringes to justify extreme beliefs and viewpoints. This shows that collective memories are not frozen in time but are malleable – continually being rewritten by different media platforms.

Like most things mediated by technology, collective memories are splintering into smaller and smaller groupings, just like the media that are instrumental in their formation.

Sarcastic Much?

“Sarcasm is the lowest form of wit, but the highest form of intelligence.”

Oscar Wilde

I fear the death of sarcasm is nigh. The alarm bells started going off when I saw a tweet from John Cleese that referenced a bit from “The Daily Show.” In it, Trevor Noah used sarcasm to run circles around the logic of Supreme Court Justice Brett Kavanaugh, who had opined that Roe v. Wade should be overturned, essentially booting the question down to the state level to decide.

Against my better judgement, I started scrolling through the comments on the thread — and, within the first couple, found that many of those commenting had completely missed Noah’s point. They didn’t pick up on the sarcasm — at all. In fact, to say they missed the point is like saying Columbus “missed” India. They weren’t even in the same ocean. Perhaps not the same planet.

Sarcasm is my mother tongue. I am fluent in it. So I’m very comfortable with sarcasm. I tend to get nervous in overly sincere environments.

I find sarcasm requires almost a type of meta-cognition, where you have to be able to mentally separate the speaker’s intention from what they’re saying. If you can hold the two apart in your head, you can truly appreciate the art of sarcasm. It’s this finely balanced and recurrent series of contradictions — with tongue firmly placed in cheek — that makes sarcasm so potentially powerful. As used by Trevor Noah, it allows us to air out politically charged issues and consider them at a mental level at least one step removed from our emotional gut reactions.

As Oscar Wilde knew — judging by his quote at the beginning of the post — sarcasm can be a nasty form of humor, but it does require some brain work. It’s a bit of a mental puzzle, forcing us to twist an issue in our heads like a cognitive Rubik’s Cube, looking at it from different angles. Because of this, it’s not for everyone. Some people are just too earnest (again, with a nod to Mr. Wilde) to appreciate sarcasm.

The British excel at sarcasm. John Cleese is a high priest of sarcasm. That’s why I follow him on Twitter. Wilde, of course, turned sarcasm into art. But as Ricky Gervais (who has his own black belt in sarcasm) explains in this piece for Time, sarcasm — and, to be more expansive, all types of irony — have been built into the British psyche over many centuries. This isn’t necessarily true for Americans. 

“There’s a received wisdom in the U.K. that Americans don’t get irony. This is of course not true. But what is true is that they don’t use it all the time. It shows up in the smarter comedies but Americans don’t use it as much socially as Brits. We use it as liberally as prepositions in everyday speech. We tease our friends. We use sarcasm as a shield and a weapon. We avoid sincerity until it’s absolutely necessary. We mercilessly take the piss out of people we like or dislike basically. And ourselves. This is very important. Our brashness and swagger is laden with equal portions of self-deprecation. This is our license to hand it out.”

Ricky Gervais – Time, November 9, 2011

That was written just over a decade ago. I believe it’s even more true today. If you choose to use sarcasm in our age of fake news and social media, you do so at your peril. Here are three reasons why:

First, as Gervais points out, sarcasm doesn’t play equally across all cultures. Americans — as one example — tend to be more sincere and, as such, take many things meant as sarcastic at face value. Sarcasm might hit home with a percentage of a U.S. audience, but it will go over a lot of American heads. It’s probably not a coincidence that many of those heads might be wearing MAGA hats.

Also, sarcasm can be fatally hamstrung by our TL;DR rush to scroll to the next thing. Sarcasm typically saves its payoff until the end. It intentionally creates a cognitive gap, and you have to be willing to stay with it to realize that someone is, in the words of Gervais, taking the “piss out of you.” Bail too early and you might never recognize it as sarcasm. I suspect more than a few of those who watched Trevor Noah’s piece didn’t stick through to the end before posting a comment.

Finally, and perhaps most importantly, social media tends to strip sarcasm of its context, leaving it hanging out there to be misinterpreted. If you are a regular watcher of “The Daily Show with Trevor Noah,” or “Last Week Tonight with John Oliver,” or even “Late Night with Seth Meyers” (who is one American that’s a master of sarcasm), you realize that sarcasm is part and parcel of it all. But when you repost any bit from any of these shows to social media, moving it beyond its typical audience, you have also removed all the warning signs that say “warning: sarcastic content ahead.” You are leaving the audience to their own devices to “get it.” And that almost never turns out well on social media.

You may say that this is all for the good. The world doesn’t really need more sarcasm. An academic study found that sarcastic messages can be more hurtful to the recipient than a sincere message. Sarcasm can cut deep, and because of this, it can lead to more interpersonal conflict.

But there’s another side to sarcasm. That same study also found that sarcasm can require us to be more creative. The mental mechanisms you use to understand sarcasm are the very same ones we need to use to be more thoughtful about important issues. It de-weaponizes these issues by using humor, while it also forces us to look at them in new ways.

Personally, I believe our world needs more Trevor Noahs, John Olivers and Seth Meyers. Sarcasm, used well, can make us a little smarter, a little more open-minded, and — believe it or not — a little more compassionate.

Using Science for Selling: Sometimes Yes, Sometimes No

A recent study out of Ohio State University seems like one of those that the world really didn’t need. The researchers were exploring whether introducing science into the marketing would help sell chocolate chip cookies.

And for those of us who make a living in marketing, this is one of those things that might make us say, “Duh, you needed research to tell us that? Of course you don’t use science to sell chocolate chip cookies!”

But bear with me, because if we keep asking why enough, we can come up with some answers that might surprise us.

So, what did the researchers learn? I quote,

“Specifically, since hedonic attributes are associated with warmth, the coldness associated with science is conceptually disfluent with the anticipated warmth of hedonic products and attributes, reducing product valuation.”

Ohio State Study

In other words – fewer and simpler ones – science doesn’t help sell cookies. And that’s because our brains think differently about some things than about others.

For example, a study published in the journal Computers in Human Behavior (Casado-Aranda, Sanchez-Fernandez and Garcia) found that when we’re exposed to “hedonic” ads – ads that appeal to pleasurable sensations – the parts of our brain that retrieve memories kick in. This isn’t true when we see utilitarian ads. Predictably, we approach those ads as a problem to be solved and engage the parts of our brain that control working memory and the ability to focus our attention.

Essentially, these two advertising approaches take two different paths in our awareness: one takes the “thinking” path and one takes the “feeling” path. Or, as Nobel laureate Daniel Kahneman would say, one takes the “thinking slow” path and one takes the “thinking fast” path.

Yet another study begins to show why this may be so. Let’s go back to chocolate chip cookies for a moment. When you smell a fresh baked cookie, it’s not just the sensory appeal “in the moment” that makes the cookie irresistible. It’s also the memories it brings back for you. We know that how things smell is a particularly effective way to trigger this connection with the past. Certain smells – like that of cookies just out of the oven – can be the shortest path between today and some childhood memory. These are called associative memories. And they’re a big part of “feeling” something rather than just “thinking” about it.

At the University of California, Irvine, neuroscientists discovered a very specific type of neuron in our memory centers that oversees the creation of new associative memories. They’re called “fan cells,” and it seems that these neurons are responsible for creating the link between new input and those emotion-inducing memories that we may have tucked away from our past. And – critically – it seems that dopamine is the key to linking the two. When our brain “smells” a potential reward, it kicks these fan cells into gear and is bathed in the “warm fuzzies.” Lead researcher Kei Igarashi said,

“We never expected that dopamine is involved in the memory circuit. However, when the evidence accumulated, it gradually became clear that dopamine is involved. These experiments were like a detective story for us, and we are excited about the results.”

Kei Igarashi – University of California, Irvine

Not surprisingly – as our first study found – introducing science into this whole process can be a bit of a buzz kill. It would be like inviting Bill Nye the Science Guy to teach you about quantum physics during your Saturday morning cuddle time.

All of this probably seems overwhelmingly academic to you. Selling something like chocolate chip cookies shouldn’t take three different scientific studies and strapping several people inside an fMRI machine to explain. We should be able to rely on our guts, and our guts know that science has no place in a campaign built on an emotional appeal.

But there is a point to all this. Different marketing approaches are handled by different parts of the brain, and knowing that allows us to reinforce our marketing intuition with a better understanding of why we humans do the things we do.

Utilitarian appeals activate the parts of the brain that are front and center, the data crunching, evaluating and rational parts of our cognitive machinery.

Hedonic appeals probe the subterranean depths of our brains, unpacking memories and prodding emotions below the thresholds of us being conscious of the process. We respond viscerally – which literally means “from our guts”.

If we’re talking about selling chocolate chip cookies, we have moved about as far towards the hedonic end of the scale as we can. At the other end we would find something like motor oil – where scientific messaging such as “advanced formulation” or “proven engine protection” would be more persuasive. But almost all other products fall somewhere in between. They are a mix of hedonic and utilitarian factors. And we haven’t even factored in the most significant of all consumer considerations – risk and how to avoid it. Think how complex things would get in our brains if we were buying a new car!

Buying chocolate chip cookies might seem like a no brainer – because – well – it almost is. Beyond dosing our neural pathways with dopamine, our brains barely kick in when considering whether to grab a bag of Chips Ahoy on our next trip to the store. In fact, the last thing you want your brain to do when you’re craving chewy chocolate is to kick in. Then you would start considering things like caloric intake and how you should be cutting down on processed sugar. Chocolate chip cookies might be a no-brainer, but almost nothing else in the consumer world is that simple.

Marketing is relying more and more on data. But data is typically restricted to answering “who”, “what”, “when” and “where” questions. It’s studies like the ones I shared here that start to pick apart the “why” of marketing.

And when things get complex, asking “why” is exactly what we need to do.

Sensationalizing Scam Culture

We seem to be fascinated by bad behavior. Our popular culture is all agog with grifters and assholes. As TV Blog’s Adam Buckman wrote in March: “Two brand-new limited series premiering this week appear to be part of a growing trend in which some of recent history’s most notorious innovators and disruptors are getting the scripted-TV treatment.”

The two series Buckman was talking about were “Super Pumped: The Battle for Uber,” about Uber CEO Travis Kalanick, and “The Dropout,” about Theranos founder Elizabeth Holmes.

But those are just two examples from a bumper crop of shows about bad behavior. My streaming services are stuffed with stories of scammers. In addition to the two series Buckman mentioned, I just finished Shonda Rhimes’ Netflix series “Inventing Anna,” about Anna Sorokin, who posed as an heiress named Anna Delvey.

All these treatments tread a tight wire of moral judgement, where the examples are presented as antisocial, but in a wink-and-a-nod kind of way, where we not-so-secretly admire these behaviors. Much as the actions are harmful to the well-being of the collective “we,” they do appeal to the selfishness and ambition of “me.”

Most of the examples given are rags-to-riches-to-retribution stories (Holmes was an exception, with her upper-middle-class background). The sky-high ambitions of Kalanick, Holmes and Sorokin were all eventually brought back down to earth. Sorokin and Holmes both ended up in prison, and Kalanick was ousted from the company he founded.

But with the subtlest of twists, these stories didn’t have to end this way. They could have been the story of almost any corporate America hustler who triumphed. With a little more substance and a little less scam, you could swap Elizabeth Holmes for Steve Jobs. They even dressed the same.

Obviously, scamming seems to sell. These people fascinate us. Part of the appeal is no doubt due to a class-conflict narrative: the scrappy hustler climbing the social ranks by whatever means possible. We love to watch “one of us” pull the wool over the eyes of the social elite.

In the case of Anna Sorokin, Laura Craik dissects our fascination in a piece published in the UK’s Evening Standard:

“The reason people are so obsessed with Sorokin is simple: she had the balls to pull off on a grand scale what so many people try and fail to pull off on a small one. To use a phrase popular on social media, Sorokin succeeded in living her best life — right down to the clothes she wore in court, chosen by a stylist. Like Jay Gatsby, she was a deeply flawed embodiment of The American Dream: a person from humble beginnings who rose to achieve wealth and social status. Only her wealth was borrowed and her social status was conferred via a chimera of untruths.”

Laura Craik – UK Evening Standard

This type of behavior is nothing new. It’s always been a part of us. In 1513, a Florentine bureaucrat named Niccolo Machiavelli gave it a name — actually, his name. In writing “The Prince,” he condoned bad behavior as long as the end goal was to elevate oneself. In a Machiavellian world, it’s always open season on suckers: “One who deceives will always find those who allow themselves to be deceived.”

For the past five centuries, Machiavellianism has been synonymous with evil. It was a recognized character flaw, described as “a personality trait that denotes cunningness, the ability to be manipulative, and a drive to use whatever means necessary to gain power. Machiavellianism is one of the traits that forms the Dark Triad, along with narcissism and psychopathy.”

Now, however, that stigma seems to be disappearing. In a culture obsessed with success, Machiavellianism becomes a justifiable means to an end, so much so that we’ve given this culture its own hashtag: #scamculture: “A scam culture is one in which scamming has not only lost its stigma but is also valorized. We rebrand scamming as ‘hustle,’ or the willingness to commodify all social ties, and this is because the ‘legitimate’ economy and the political system simply do not work for millions of Americans.”

It’s a culture that’s very much at home in Silicon Valley. The tech world is steeped in Machiavellianism. Its tenets are accepted — even encouraged — business practices in the Valley. “Fake it til you make it” is tech’s modus operandi. The example of Niccolo Machiavelli has gone from being a cautionary tale to a how-to manual.

But these predatory practices come at a price. Doing business this way destroys trust. And trust is still, by far, the best strategy for our mutual benefit. In behavioral economics, there’s something called “tit for tat,” which according to Wikipedia “posits that a person is more successful if they cooperate with another person. Implementing a tit-for-tat strategy occurs when one agent cooperates with another agent in the very first interaction and then mimics their subsequent moves. This strategy is based on the concepts of retaliation and altruism.”

In countless game theory simulations, tit for tat has proven to be the most successful strategy for long-term success. It assumes a default position of trust, only moving to retaliation if required.
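For readers who want to see the mechanics, here is a minimal sketch of tit for tat in an iterated prisoner’s dilemma, using the standard textbook payoffs; the opponent strategy and round count are mine, chosen purely for illustration.

```python
# Iterated prisoner's dilemma with standard payoffs:
# both cooperate -> 3 each; both defect -> 1 each;
# lone defector -> 5, the betrayed cooperator -> 0.
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(my_history, their_history):
    """Cooperate first, then simply mirror the opponent's previous move."""
    return "C" if not their_history else their_history[-1]

def always_defect(my_history, their_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(hist_a, hist_b)
        move_b = strategy_b(hist_b, hist_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # mutual cooperation: (30, 30)
print(play(tit_for_tat, always_defect))  # punished after round one: (9, 14)
```

Played against itself, trust compounds; played against a pure defector, it loses only the first round, which is why the strategy holds up so well over the long run.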

Our society needs trust to function properly. In a New York Times op-ed entitled “Why We Need to Address Scam Culture,” Tressie McMillan Cottom writes,  

“Scams weaken our trust in social institutions, but their going mainstream — divorced from empathy for the victims or stigma for the perpetrators — means that we have accepted scams as institutions themselves.”

Tressie McMillan Cottom – NY Times

The reason that trust is more effective than scamming is that predatory practices are self-limiting. You can only be a predator if you have enough prey. In a purely Machiavellian world, trust disappears — and there are no easy marks to prey upon.

I am Generation Jones

I was born in 1961. I always thought that technically made me a baby boomer. But I recently discovered that I am, in fact, part of Generation Jones.

If you haven’t heard of that term (as I had not, until I read a post on it a few weeks ago) Generation Jones refers to people born from 1955 to 1964 — a cusp generation squeezed between the massive boomer block and Gen X.

That squares with me. I always somehow knew I wasn’t really a boomer, but I also knew I wasn’t Gen X. And now I know why. I, along with Barack Obama and Wayne Gretzky, was squarely in the middle of Generation Jones.

I always felt the long shadow of World War II defined baby boomers, but it didn’t define me. My childhood felt like eons removed from the war. Most of the more-traumatic wounds had healed by the time I was riding my trike through the relatively quiet suburban streets of Calgary, Alberta.

I didn’t appreciate the OK Boomer memes, not because I was the butt of them, but more because I didn’t really feel they applied to me. They didn’t hit me where I live. It was like I was winged by a shot meant for someone else.

OK Boomer digs didn’t really apply to my friends and contemporaries either, all of whom are also part of Generation Jones. For the most part, we’re trying to do our best dealing with climate change, racial inequality, more fluid gender identification and political polarization. We get it. Is there entitlement? Yeah, more than a little. But we’re trying.

And I also wasn’t part of Gen X. I wasn’t a latchkey kid. My parents didn’t obsess over the almighty dollar, so I didn’t feel a need to push back against it. My friends and I worked a zillion hours, because we were — admittedly — still materialistic. But it was a different kind of materialism, one edged with more than a little anxiety.

I hit the workforce in the early ‘80s, right in the middle of a worldwide recession. Generation Jones certainly wanted to get ahead, but we also wanted to keep our jobs, because if we lost them, there was no guarantee we’d find another.

When boomers were entering the workforce, through the 1970s, Canada’s unemployment rate hovered in the 6% to 8% range (U.S. numbers varied but roughly followed the same pattern). In 1982, the year I tried to start my career, it suddenly shot up to 13%. Through the ‘80s, as Gen X started to get their first jobs, it declined again to the 8% range. Generation Jones started looking for work just when a job was historically the hardest to find.

It wasn’t just the jobless rate. Interest rates also skyrocketed to historic levels in the early ‘80s. Again, using data from the Bank of Canada, its benchmark rate peaked at an astronomical 20.78% the same month I turned 20, in 1981. Not only could we not find jobs, we couldn’t have afforded credit even if we had them.

So yes, we were trying to keep up with the Joneses — this is where the name for our generation comes from, coined by social commentator Jonathan Pontell — but it wasn’t all about getting ahead. A lot of it was just trying to keep our heads above water.

We were a generation moving into adulthood at the beginning of HIV/AIDS, Reaganomics, globalization and the mass deindustrialization of North America. All the social revolutions of the ‘60s and ‘70s had crystallized to the point where they now had real-world consequences. We were figuring out a world that seemed to be pivoting sharply.

As I said, I always felt that I was somewhat accidentally lodged between baby boomer and Gen X, wading my way through the transition.

Part of that transition involved the explosion of technology that became much more personal at the beginning of the 1980s.  To paraphrase Shakespeare in “Twelfth Night”: Some are born with technology, some achieve technology, and some have technology thrust upon them.

Generation Jones is in the last group.

True boomers could make the decision to ignore technology and drift through life just adopting what they absolutely had to. Gen X grew up with the rudiments of technology, making it more familiar territory for them. The leading edge of that generation started entering the workforce in the mid-‘80s. Computers were becoming more common. The Motorola “brick” cellphone had debuted. Technology was becoming ubiquitous – impossible to ignore.

But we were caught in between. We had to make a decision: Do we embrace technology, or do we fight against it? A lot of that decision depended on what we wanted to do for a living. Through the ‘80s, one by one, industries were being transformed by computers and digitalization.

Often, we of Generation Jones got into our first jobs working on the technology of yesterday — and very early in our careers, we were forced to adopt the technologies of tomorrow.

I started as a radio copywriter in 1982, and my first ads were written on an IBM Selectric and produced by cutting and patching two-track audio tape together on a reel-to-reel machine with razor blades and splicing tape. Just a few years later, I was writing on an Apple IIe, and ads were starting to be recorded digitally. That shift in technology happened just when our generation was beginning our careers. Some of us went willingly, some of us went kicking and screaming.

This straddling of two very different worlds seems to personify my generation. I think, with the hindsight of history, we will identify the early ‘80s as a period of significant transition in almost every aspect of our culture. Obviously, all generations had to navigate that transition, but for Generation Jones, that period just happened to coincide with what is typically the biggest transition for anyone in any generation: the passing from childhood to adulthood. It is during this time that we take the experiences of growing up and crystallize them into the foundations of who we will be for the rest of our lives.

For Generation Jones, those foundations had to be built on the fly, as the ground kept moving beneath our feet.