As the “Office” Goes, What May Go With It?

In 2017, Apple employees moved into the new Apple headquarters, called the Ring, in Cupertino, California. This was the last passion project of Steve Jobs, who personally made the pitch to Cupertino City Council just months before he passed away. And its design was personally overseen by Apple’s then Chief Design Officer, Jony Ive. The new headquarters were meant to give Apple’s Cupertino employees the ultimate “sense of place”. They were designed to be organic and flexible, evolving to keep meeting those employees’ needs.

Of course, no one saw a global pandemic coming. COVID-19 drove almost all those employees to work from home. The massive campus sat empty. And now, as Apple tries to bring everyone back to the Ring, it seems what has evolved is the expectations of the employees, who have taken a hard left turn away from the very idea of “going to work.”

Just last month, Apple had to backtrack on its edict demanding that everyone start coming back to the office three days a week. A group calling itself “Apple Together” published a letter asking the company to embrace a hybrid work schedule that formalized a remote workplace. And one of Apple’s leading AI engineers, Ian Goodfellow, resigned in May because of Apple’s insistence on going back to the office.

Perhaps Apple’s Ring is just the most elegant example of a last-gasp concept tied to a generation that is rapidly fading from the office into retirement. The Ring could be the world’s biggest and most expensive anachronism. 

The Virtual Workplace debate is not new for Silicon Valley. Almost a decade ago, Marissa Mayer also issued a “Back to the Office” edict when she came from Google to take over the helm at Yahoo. A company memo laid out the logic:

“To become the absolute best place to work, communication and collaboration will be important, so we need to be working side-by-side. That is why it is critical that we are all present in our offices. Some of the best decisions and insights come from hallway and cafeteria discussions, meeting new people, and impromptu team meetings. Speed and quality are often sacrificed when we work from home. We need to be one Yahoo!, and that starts with physically being together.”

Marissa Mayer, Yahoo Company Memo

The memo was not popular with Yahooligans. I was still making regular visits to the Valley back then and heard first-hand the grumblings from some of them. My own agency actually had a similar experience, albeit on a much smaller scale.

Over the past decade – until COVID – employees and employers have tentatively tested the realities of a remote workplace. But in the blink of an eye, the pandemic turned this ongoing experiment into the only option available. If businesses wanted to continue operating, they had to embrace working from home. And if employees wanted to keep their jobs, they had to make room on the dining room table for their laptop. Overnight, Zoom meetings and communicating through Slack became the new normal.

Sometimes, necessity is the mother of adoption. And with a 27 (and counting) month runway to get used to it, it appears that the virtual workplace is here to stay.

In some ways, the virtual office represents the unbundling of our worklife. Because our world was constrained by physical limitations of distance, we tended to deal with a holistic world. Everything came as a package that was assembled by proximity. We operated inside an ecosystem that shared the same physical space. This was true for almost everything in our lives, including our jobs. The workplace was a place, with physical and social properties that existed within that place.

But technology allows us to unbundle that experience. We can separate work from place. We pick and choose what seem to be the most important things we need to do our jobs and take them with us, free from the physical restraints that once kept us all in the same place at the same time. In that process, there are both intended and unintended consequences.

On the face of it, freeing our work from its physical constraints (when this is possible) makes all kinds of sense. For the employer, it eliminates the need for maintaining a location, along with the expense of doing so. And, when you can work anywhere, you can also recruit from anywhere, dramatically opening up the talent pool.

For the employee, it’s probably even more attractive. You can work on your schedule, giving you more flexibility to maintain a healthy work-life balance. Long and frustrating commutes are eliminated. Your home can be wherever you want to live, rather than where you have to live because of your job.

Like I said, when you look at all these intended consequences, a virtual workplace seems to be all upside, with little downside. However, the downsides are starting to show through the cracks created by the unintended consequences.

To me, this seems somewhat analogous to the introduction of monoculture agriculture. You could say this also represented the unbundling of farming for the sake of efficiency. Focusing on one crop in one place at a time made all kinds of sense. You could standardize planting, fertilizing, watering and harvesting based on what was best for the chosen crop. It allowed for the introduction of machinery, increasing yields and lowering costs. Small wonder that over the past two centuries – and especially since World War II – the world rushed to embrace monoculture agriculture.

But now we’re beginning to see the unintended consequences. Dr. Frank Uekotter, Professor of Environmental Humanities at the University of Birmingham, calls monoculturalism a “centuries-long stumble.” He warns that it has developed its own momentum: “Somehow that fledgling operation grew into a monster. We may have to cut our losses at some point, but monoculture has absorbed decades of huge investment and moving away from it will be akin to attempting a handbrake turn in a supertanker.”

We’re learning – probably too late – that nature never intended plants to be surrounded only by other plants of the same kind. Monocultures lead to higher rates of disease and the degradation of the environment. The most extreme example of this is how monocultures of African oil palm plantations are swallowing the biodiverse Amazon rainforest at an alarming rate. Sometimes, as Joni Mitchell reminds us, “You don’t know what you’ve got til it’s gone.”

The same could be true for the traditional workplace. I think Marissa Mayer was on to something. We are social animals and have evolved to share spaces with others of our species. There is a vast repertoire of evolved mechanisms and strategies that make us able to function in these environments. While a virtual workplace may be logical, we may be sacrificing something more intangible that lies buried in our humanness. We can’t see it because we’re not exactly sure what it is, but we’ll know it when we lose it.

Maybe it’s loyalty. A few weeks ago, the Wharton School of Business published an article entitled, “Is Workplace Loyalty Gone for Good?” We have all heard of the “Great Resignation.” Last year, the US had over 40 million people quit their jobs. The advent of the Virtual Workplace has also meant a virtual job market. Employees are in the driver’s seat. Everything is up for renegotiation. As the article said, “the modern workplace has become increasingly transactional.”

Maybe that’s a good thing. Maybe not. That’s the thing with unintended consequences. Only time will tell.

Minority Report Might Be Here — 30 Years Early

“Sometimes, in order to see the light, you have to risk the dark.”

Iris Hineman – 2002’s Minority Report

I don’t usually look to Hollywood for deep philosophical reflection, but today I’m making an exception. Steven Spielberg’s 2002 film Minority Report is balanced on some fascinating ground, ethically speaking. For me, it brought up a rather interesting question – could you get a clear enough picture of someone’s mental state from their social media feed to predict pathological behavior? And – even if you could – should you?

If you’re not familiar with the movie, here is the background on this question. In the year 2054, there are three individuals who possess a psychic ability to see events in the future, primarily premeditated murders. These individuals are known as Precognitives, or Precogs. Their predictions are used to set up a PreCrime Division in Washington, DC, where suspects are arrested before they can commit the crime.

Our Social Media Persona

A persona is a social façade – a mask we don that portrays a role we play in our lives. For many of us that now includes the digital stage of social media. Here too we have created a persona, where we share the aspects of ourselves that we feel we need to put out there on our social media platform of choice.

What may surprise us, however, is that even though we supposedly have control over what we share, what we do put out there still reveals a surprising amount about who we are – both intentionally and unintentionally. And, if those clues are troubling, does our society have a responsibility – or the right – to proactively reach out?

In a commentary published in the American Journal of Psychiatry, Dr. Shawn McNeil said of social media,

“Scientists should be able to harness the predictive potential of these technologies in identifying those most vulnerable. We should seek to understand the significance of a patient’s interaction with social media when taking a thorough history. Future research should focus on the development of advanced algorithms that can efficiently identify the highest-risk individuals.”

Dr. Shawn McNeil

Along this theme, a 2017 study (Liu & Campbell) found that where we fall on the so-called “Big Five” personality traits – neuroticism, extraversion, openness, agreeableness and conscientiousness – as well as the “Big Two” metatraits – plasticity and stability – can be a pretty accurate predictor of how we use social media.

But what if we flip this around?  If we just look at a person’s social media feed, could we tell what their personality traits and metatraits are with a reasonable degree of accuracy? Could we, for instance, assess their mental stability and pick up the warning signs that they might be on the verge of doing something destructive, either to themselves or to someone else? Following this logic, could we spot a potential crime before it happens?
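To make that “flip” concrete, here is a minimal sketch of what such a prediction pipeline might look like. Everything in it is hypothetical – the toy posts, the trait scores and the model choice are illustrations, not the method used in the Liu & Campbell study or in any clinical tool.

```python
# Minimal sketch: predicting Big Five scores from social media text.
# Assumes a hypothetical labeled dataset (posts paired with trait scores);
# nothing here reflects any actual study's methodology.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

TRAITS = ["neuroticism", "extraversion", "openness",
          "agreeableness", "conscientiousness"]

# Toy training data: each user's concatenated posts and invented trait scores (0-1).
posts = [
    "had an amazing night out with friends, love this city!",
    "everything is going wrong again, can't deal with this week",
    "spent the weekend reading about astronomy and medieval history",
]
scores = [
    [0.2, 0.9, 0.5, 0.7, 0.6],   # hypothetical Big Five ratings per user
    [0.8, 0.3, 0.4, 0.4, 0.3],
    [0.3, 0.2, 0.9, 0.6, 0.7],
]

# Bag-of-words features feeding a multi-output linear model.
model = make_pipeline(TfidfVectorizer(), Ridge())
model.fit(posts, scores)

new_feed = ["another quiet evening alone, wondering where it all went wrong"]
predicted = model.predict(new_feed)[0]
for trait, value in zip(TRAITS, predicted):
    print(f"{trait}: {value:.2f}")
```

Even a toy model like this makes the point: once labeled data exists, turning a feed into a score is routine engineering – which is exactly why the ethical question matters more than the technical one.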

Pathological Predictions

Police are already using social media to track suspects and find criminals. But this is typically applied after the crime has occurred. For instance, police departments regularly scan social media using facial recognition technology to track down suspects. They comb a suspect’s social media feeds to establish whereabouts and gather evidence. Of course, you can only scan social content that people are willing to share. But when these platforms are as ubiquitous as they are, it’s constantly astounding that people share as much as they do, even when they’re on the run from the law.

There are certainly ethical questions about mining social media content for law enforcement purposes. For example, facial recognition algorithms tend to have flaws when it comes to false positives with those of darker complexion, leading to racial profiling concerns. But at least this activity tries to stick with the spirit of the tenet that our justice system is built on: you are innocent until proven guilty.

There must be a temptation, however, to go down the same path as Minority Report and try to pre-empt crime – by identifying a “Precrime”.

Take a school shooting, for example. In the May 31 issue of Fortune, senior technology journalist Jeremy Kahn asked this question: “Could A.I. prevent another school shooting?” In the article, Kahn referenced a study where a team at the Cincinnati Children’s Hospital Medical Center used artificial intelligence software to analyze transcripts of teens who went through a preliminary interview with psychiatrists. The goal was to see how well the algorithm’s assessment of a subject’s propensity for violence compared with the more extensive assessments made by trained psychiatrists. They found that the assessments matched about 91% of the time.

I’ll restate that so the point hits home: An A.I. algorithm that scanned a preliminary assessment could match much more extensive assessments done by expert professionals 9 out of 10 times –  even without access to the extensive records and patient histories that the psychiatrists had at their disposal.

Let’s go one step further and connect those two dots: If social media content could be used to identify potentially pathological behaviors, and if an AI could then scan that content to predict whether those behaviors could lead to criminal activities, what do we do with that?

It puts us squarely on a very slippery slope, but we have to acknowledge that we are getting very close to a point where technology forces us to ask a question we’ve never been able to ask before: “If we – with a reasonable degree of success – could prevent violent crimes that haven’t happened yet, should we?”

Putting a Label on It

We know that news can be toxic. The state of affairs is so bad that many of the media sources we rely on for information have been demonstrated to be extremely harmful to our society. Misinformation, in its many forms, leads to polarization, the destruction of democracy, the engendering of hate and the devaluing of social capital. It is – quite likely – one of the most destructive forces we face today.

To make matters worse, a study conducted by Ben Lyons from the University of Utah found that we’re terrible at spotting misinformation, yet many of us think we can’t be fooled. Seventy-five percent of us overestimate our ability to spot fake news by as much as 22 percentile points. And the more overconfident we are, the more likely we are to share false news.

Given the toxic effects of unreliable news reporting, it was only natural that – sooner or later – someone would come up with the logical idea of putting a warning label on it. And that’s exactly what NewsGuard does. Using “trained journalists” to review the most popular news platforms (they say they cover 95% of our news source engagement), they give each source a badge, ranging from green to red, showing its reliability. In a recent report, they highlighted some of the U.S.’s biggest misinformation culprits (NewsMax.com, TheGatewayPundit.com and the Federalist.com) and some of the sources that are most reliable (MSNBC.com, NYTimes.com, WashingtonPost.com and NPR.com).

But here’s the question. Just because you slap a warning label on toxic news sources, will it have any effect? That’s exactly what a group of researchers at New York University’s Center for Social Media and Politics wanted to find out. And the answer is both yes and no.

Kevin Aslett, lead author of the paper, said,

“While our study shows that, overall, credibility ratings have no discernible effect on misperceptions or online news consumption behavior of the average user, our findings suggest that the heaviest consumers of misinformation — those who rely on low-credibility sites — may move toward higher-quality sources when presented with news reliability ratings.”

Kevin Aslett, NYU Center for Social Media and Politics

This is interesting. In essence, this study is saying that if you run into the odd unreliable news source and you see a warning label, it will probably have no effect. But if you make a steady diet of unreliable news and see warning label after warning label, it may eventually sink in and cause you to improve your sources for news consumption. This seems to indicate warning labels might have a cumulative effect. The more you’re exposed to them, the more effective they become.

We are literally of two minds – one driven by reason and one by emotion. Warning labels try to appeal to one mind, but our likelihood to ignore them comes from our other mind. The effectiveness of these labels depends on which mind is in the driver’s seat. There is a wide spectrum of circumstances that may bring you face to face with a warning label, and the effectiveness of that label may depend on a sort of cognitive “Russian roulette” – a game of odds that determines whether the label will impact you. If this is the case, it makes sense that the more you see a warning label, the greater the odds that – at least one time – you might be of a mind to pay attention to it.
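To put a rough number on that intuition (purely illustrative – the per-exposure probability is invented): if each encounter with a warning label has some small, independent chance of catching you in a receptive frame of mind, the cumulative odds climb quickly with repetition.

```python
# Illustrative only: assumes each exposure to a warning label has an
# independent 10% chance of landing while the "rational mind" is in charge.
p_single = 0.10

for exposures in (1, 5, 10, 20):
    p_at_least_once = 1 - (1 - p_single) ** exposures
    print(f"{exposures:>2} exposures -> {p_at_least_once:.0%} chance it lands at least once")
```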

Up in Smoke

This might help explain the so-so track record of warning labels in other arenas. Probably the longest trial run of warning labels has been on cigarette packages. The United States started requiring these labels in 1966. In 2001, my own country – Canada – was the first country in the world to introduce graphic warning labels; huge and horrible pictures of the effects of smoking plastered across every pack of smokes.

This past week, we in Canada went one better. Again, we’re going to be the first country in the world to require warning labels on each and every cigarette. Apparently, our government has bought into the exposure effect of warning labels – more is better.

It seems to be working. In 1965 the smoking rate in Canada was 50%. In 2020 it was 13%.

But a recent study (Strong, Pierce, Pulvers et al) showed that if smokers aren’t ready to quit, warning labels may have “decreased positive perceptions of cigarettes associated with branded cigarette packs but without clearly increasing health concerns. They also increased quitting cognitions but did not affect either cigarette cessation or consumption levels.”

Like I said – just because you get through to one mind doesn’t mean you’ll have any luck with the other.

Side Effects May Include…

Perhaps the most interesting case of warnings in the consumer marketplace is prescription drugs. Because the United States is one of the few places in the world (New Zealand is the other) where prescription drugs can be advertised directly to consumers, the Food and Drug Administration has mandated that ads must include a fair balance of rewards and risks. Advertisers being advertisers, the rewards take up most of the ad, with sunlight-infused shots of people enjoying life thanks to the miracles of the drug in question. But, at the end, there is a laundry list of side effects read in a voiceover, typically at breakneck pace in a deadly monotone.

It’s this example that highlights perhaps the main issue with warning labels; they require a calculation of risk vs reward. If this wasn’t true, we wouldn’t need a warning label. Nobody needs to tell us not to drink battery acid. That’s all risk and no reward. If there’s a label on it, it’s probably on something we want to do but know we shouldn’t.

A study of the effectiveness of these warnings in DTC prescription ads found they become less effective because of something called the argument dilution effect. Ads that only include the worst side effects are more effective than ads that include every potential side effect, even the minor ones. Hence the laundry list. If a drug could cause both sudden heart attacks and minor skin rashes, our mind tends to let these things cancel each other out.

This effect is an example of the heuristic nature of our risk vs reward decision making. It needs to operate quickly, so it relies on the irrational, instinctive part of our neural circuitry. We don’t take the time to weigh everything logically – we make a gut call. Marketers know the science behind this and continually use it to their advantage.

Warning labels are an easy legislative fix to try to plug this imperfectly human loophole. It seems to make sense, but it doesn’t really address the underlying factors. Given enough time and enough exposure, they can shift behaviors, but we shouldn’t rely on them too much.

Memories Made by Media

If you said the year 1967 to me, the memory that would pop into my head would be of Haight-Ashbury (ground zero for the counterculture movement), hippies and the summer of love. In fact, that same memory would effectively stand in for the period 1967 to 1969. In my mind, those three years were variations on the theme of Woodstock, the iconic music festival of 1969.

But none of those are my memories. I was alive, but my own memories of that time are indistinct and fuzzy. I was only 6 that year and lived in Alberta, some 1300 miles from the intersection of Haight and Ashbury Streets, so I have discarded my own personal representative memories. The ones I have were all created by images that came via media.

The Swapping of Memories

This is an example of the two types of memories we have – personal or “lived” memories and collective memories. Collective memories are the memories we get from outside, either from other people or, in my example, from media. As we age, there tends to be a flow back and forth between these two types of memories, with one type coloring the other.

One group of academics proposed an hourglass model as a working metaphor to understand this continuous exchange of memories – with some flowing one way and others flowing the other.  Often, we’re not even aware of which type of memory we’re recalling, personal or collective. Our memories are notoriously bad at reflecting reality.

What is true, however, is that our personal memories and our collective memories tend to get all mixed up. The lower our confidence in our personal memories, the more we tend to rely on collective memories. For periods before we were born, we rely solely on images we borrow.

Iconic Memories

What is true for all memories, ours or the ones we borrow from others, is we put them through a process called “leveling and sharpening.” This is a type of memory consolidation where we throw out some of the detail that is not important to us – this is leveling – and exaggerate other details to make it more interesting – i.e. sharpening.

Take my borrowed memories of 1967, for example. There was a lot more happening in the world than whatever was happening in San Francisco during the Summer of Love, but I haven’t retained any of it in my representative memory of that year. For example, there was a military coup in Greece, the first successful human heart transplant, the creation of the Corporation for Public Broadcasting, a series of deadly tornadoes in Chicago, and Typhoon Emma, which left 140,000 people homeless in the Philippines. But none of that made it into my memory of 1967.

We could call the memories we do keep “iconic” – which simply means we choose symbols to represent a much bigger and more complex reality – like everything that happened in a 365-day stretch five and a half decades ago.

Mass Manufactured Memories

Something else happens when we swap our own personal memories for collective memories – we find much more commonality in our memories. The more removed we become from our own lived experiences, the more our memories become common property.

If I asked you to say the first thing that comes to mind about 2002, you would probably look back through your own personal memory store to see if there was anything there. Chances are it would be a significant event from your own life, and this would make it unique to you. If we had a group of 50 people in a room and I asked that question, I would probably end up with 50 different answers.

But if I asked that same group what the first thing is that comes to mind when I say the year 1967, we would find much more common ground. And that ground would probably be defined by how each of us identifies ourselves. Some of you might have the same iconic memory that I do – that of Haight-Ashbury and the Summer of Love. Others may have picked the Vietnam War as the iconic memory from that year. But I would venture to guess that in our group of 50, we would end up with only a handful of answers.

When Memories are Made of Media

I am taking this walk down Memory Lane because I want to highlight how much we rely on the media to supply our collective memories. This dependency is critical, because once media images are processed by us and become part of our collective memories, they hold tremendous sway over our beliefs. These memories become the foundation for how we make sense of the world.

This is true for all media, including social media. A study in 2018 (Birkner & Donk) found that “alternative realities” can be formed through social media to run counter to collective memories formed from mainstream media. Often, these collective memories formed through social media are polarized by nature and are adopted by outlier fringes to justify extreme beliefs and viewpoints. This shows that collective memories are not frozen in time but are malleable – continually being rewritten by different media platforms.

Like most things mediated by technology, collective memories are splintering into smaller and smaller groupings, just like the media that are instrumental in their formation.

Sarcastic Much?

“Sarcasm is the lowest form of wit, but the highest form of intelligence.”

Oscar Wilde

I fear the death of sarcasm is nigh. The alarm bells started going off when I saw a tweet from John Cleese that referenced a bit from “The Daily Show.” In it, Trevor Noah used sarcasm to run circles around the logic of Supreme Court Justice Brett Kavanaugh, who had opined that Roe v. Wade should be overturned, essentially booting the question down to the state level to decide.

Against my better judgement, I started scrolling through the comments on the thread — and, within the first couple, found that many of those commenting had completely missed Noah’s point. They didn’t pick up on the sarcasm — at all. In fact, to say they missed the point is like saying Columbus “missed” India. They weren’t even in the same ocean. Perhaps not the same planet.

Sarcasm is my mother tongue. I am fluent in it. So I’m very comfortable with sarcasm. I tend to get nervous in overly sincere environments.

I find sarcasm requires almost a type of meta-cognition, where you have to be able to mentally separate the speaker’s intention from what they’re saying. If you can hold the two apart in your head, you can truly appreciate the art of sarcasm. It’s this finely balanced and recurrent series of contradictions — with tongue firmly placed in cheek — that makes sarcasm so potentially powerful. As used by Trevor Noah, it allows us to air out politically charged issues and consider them at a mental level at least one step removed from our emotional gut reactions.

As Oscar Wilde knew — judging by his quote at the beginning of the post — sarcasm can be a nasty form of humor, but it does require some brain work. It’s a bit of a mental puzzle, forcing us to twist an issue in our heads like a cognitive Rubik’s Cube, looking at it from different angles. Because of this, it’s not for everyone. Some people are just too earnest (again, with a nod to Mr. Wilde) to appreciate sarcasm.

The British excel at sarcasm. John Cleese is a high priest of sarcasm. That’s why I follow him on Twitter. Wilde, of course, turned sarcasm into art. But as Ricky Gervais (who has his own black belt in sarcasm) explains in this piece for Time, sarcasm — and, to be more expansive, all types of irony — have been built into the British psyche over many centuries. This isn’t necessarily true for Americans. 

“There’s a received wisdom in the U.K. that Americans don’t get irony. This is of course not true. But what is true is that they don’t use it all the time. It shows up in the smarter comedies but Americans don’t use it as much socially as Brits. We use it as liberally as prepositions in everyday speech. We tease our friends. We use sarcasm as a shield and a weapon. We avoid sincerity until it’s absolutely necessary. We mercilessly take the piss out of people we like or dislike basically. And ourselves. This is very important. Our brashness and swagger is laden with equal portions of self-deprecation. This is our license to hand it out.”

Ricky Gervais – Time, November 9, 2011

That was written just over a decade ago. I believe it’s even more true today. If you choose to use sarcasm in our age of fake news and social media, you do so at your peril. Here are three reasons why:

First, as Gervais points out, sarcasm doesn’t play equally across all cultures. Americans — as one example — tend to be more sincere and, as such, take many things meant as sarcastic at face value. Sarcasm might hit home with a percentage of a U.S. audience, but it will go over a lot of American heads. It’s probably not a coincidence that many of those heads might be wearing MAGA hats.

Also, sarcasm can be fatally hamstrung by our TL;DR rush to scroll to the next thing. Sarcasm typically saves its payoff until the end. It intentionally creates a cognitive gap, and you have to be willing to stay with it to realize that someone is, in the words of Gervais, taking the “piss out of you.” Bail too early and you might never recognize it as sarcasm. I suspect more than a few of those who watched Trevor Noah’s piece didn’t stick through to the end before posting a comment.

Finally, and perhaps most importantly, social media tends to strip sarcasm of its context, leaving it hanging out there to be misinterpreted. If you are a regular watcher of “The Daily Show with Trevor Noah,” or “Last Week Tonight with John Oliver,” or even “Late Night with Seth Meyers” (one American who is a master of sarcasm), you realize that sarcasm is part and parcel of it all. But when you repost any bit from any of these shows to social media, moving it beyond its typical audience, you have also removed all the warning signs that say “warning: sarcastic content ahead.” You are leaving the audience to their own devices to “get it.” And that almost never turns out well on social media.

You may say that this is all for the good. The world doesn’t really need more sarcasm. An academic study found that sarcastic messages can be more hurtful to the recipient than a sincere message. Sarcasm can cut deep, and because of this, it can lead to more interpersonal conflict.

But there’s another side to sarcasm. That same study also found that sarcasm can require us to be more creative. The mental mechanisms you use to understand sarcasm are the very same ones we need to use to be more thoughtful about important issues. It de-weaponizes these issues by using humor, while it also forces us to look at them in new ways.

Personally, I believe our world needs more Trevor Noahs, John Olivers and Seth Meyers. Sarcasm, used well, can make us a little smarter, a little more open-minded, and — believe it or not — a little more compassionate.

Using Science for Selling: Sometimes Yes, Sometimes No

A recent study out of Ohio State University seems like one of those that the world really didn’t need. The researchers were exploring whether introducing science into the marketing would help sell chocolate chip cookies.

And for those of us who make a living in marketing, this is one of those findings that might make us say “Duh, you needed research to tell us that? Of course you don’t use science to sell chocolate chip cookies!”

But bear with me, because if we keep asking why enough, we can come up with some answers that might surprise us.

So, what did the researchers learn? I quote,

“Specifically, since hedonic attributes are associated with warmth, the coldness associated with science is conceptually disfluent with the anticipated warmth of hedonic products and attributes, reducing product valuation.”

Ohio State Study

In other words – ones much simpler and fewer in number – science doesn’t help sell cookies. And that’s because our brains think differently about some things than about others.

For example, a study published in the journal Computers in Human Behavior (Casado-Aranda, Sanchez-Fernandez and Garcia) found that when we’re exposed to “hedonic” ads – ads that appeal to pleasurable sensations – the parts of our brain that retrieve memories kick in. This isn’t true when we see utilitarian ads. Predictably, we approach those ads as a problem to be solved and engage the parts of our brain that control working memory and the ability to focus our attention.

Essentially, these two advertising approaches take two different paths in our awareness, one takes the “thinking” path and one takes the “feeling” path. Or, as Nobel Laureate Daniel Kahneman would say, one takes the “thinking slow” path and one takes the “thinking fast” path.

Yet another study begins to show why this may be so. Let’s go back to chocolate chip cookies for a moment. When you smell a fresh baked cookie, it’s not just the sensory appeal “in the moment” that makes the cookie irresistible. It’s also the memories it brings back for you. We know that how things smell is a particularly effective way to trigger this connection with the past. Certain smells – like that of cookies just out of the oven – can be the shortest path between today and some childhood memory. These are called associative memories. And they’re a big part of “feeling” something rather than just “thinking” about it.

At the University of California, Irvine, neuroscientists discovered a very specific type of neuron in our memory centers that oversees the creation of new associative memories. They’re called “fan cells,” and it seems that these neurons are responsible for creating the link between new input and those emotion-inducing memories that we may have tucked away from our past. And – critically – it seems that dopamine is the key to linking the two. When our brains “smell” a potential reward, these fan cells kick into gear and our brain is bathed in the “warm fuzzies.” Lead researcher Kei Igarashi said,

“We never expected that dopamine is involved in the memory circuit. However, when the evidence accumulated, it gradually became clear that dopamine is involved. These experiments were like a detective story for us, and we are excited about the results.”

Kei Igarashi – University of California, Irvine

Not surprisingly – as our first study found – introducing science into this whole process can be a bit of a buzz kill. It would be like inviting Bill Nye the Science Guy to teach you about quantum physics during your Saturday morning cuddle time.

All of this probably seems overwhelmingly academic to you. Selling something like chocolate chip cookies isn’t something that should take three different scientific studies and strapping several people inside an fMRI machine to explain. We should be able to rely on our guts, and our guts know that science has no place in a campaign built on an emotional appeal.

But there is a point to all this. Different marketing approaches are handled by different parts of the brain, and knowing that allows us to reinforce our marketing intuition with a better understanding of why we humans do the things we do.

Utilitarian appeals activate the parts of the brain that are front and center, the data crunching, evaluating and rational parts of our cognitive machinery.

Hedonic appeals probe the subterranean depths of our brains, unpacking memories and prodding emotions below the thresholds of us being conscious of the process. We respond viscerally – which literally means “from our guts”.

If we’re talking about selling chocolate chip cookies, we have moved about as far towards the hedonic end of the scale as we can. At the other end we would find something like motor oil – where scientific messaging such as “advanced formulation” or “proven engine protection” would be more persuasive. But almost all other products fall somewhere in between. They are a mix of hedonic and utilitarian factors. And we haven’t even factored in the most significant of all consumer considerations – risk and how to avoid it. Think how complex things would get in our brains if we were buying a new car!

Buying chocolate chip cookies might seem like a no brainer – because – well – it almost is. Beyond dosing our neural pathways with dopamine, our brains barely kick in when considering whether to grab a bag of Chips Ahoy on our next trip to the store. In fact, the last thing you want your brain to do when you’re craving chewy chocolate is to kick in. Then you would start considering things like caloric intake and how you should be cutting down on processed sugar. Chocolate chip cookies might be a no-brainer, but almost nothing else in the consumer world is that simple.

Marketing is relying more and more on data. But data is typically restricted to answering “who”, “what”, “when” and “where” questions. It’s studies like the ones I shared here that start to pick apart the “why” of marketing.

And when things get complex, asking “why” is exactly what we need to do.

Sensationalizing Scam Culture

We seem to be fascinated by bad behavior. Our popular culture is all agog with grifters and assholes. As TV Blog’s Adam Buckman wrote in March: “Two brand-new limited series premiering this week appear to be part of a growing trend in which some of recent history’s most notorious innovators and disruptors are getting the scripted-TV treatment.”

The two series Buckman was talking about were “Super Pumped: The Battle for Uber,” about Uber CEO Travis Kalanick, and “The Dropout,” about Theranos founder Elizabeth Holmes.

But those are just two examples from a bumper crop of shows about bad behavior. My streaming services are stuffed with stories of scammers. In addition to the two series Buckman mentioned, I just finished Shonda Rhimes’ Netflix series “Inventing Anna,” about Anna Sorokin, who posed as an heiress named Anna Delvey.

All these treatments tread a tight wire of moral judgement, where the examples are presented as antisocial, but in a wink-and-a-nod kind of way, where we not so secretly admire these behaviors. Much as the actions are harmful to the well-being of the collective “we,” they do appeal to the selfishness and ambition of “me.”

Most of the examples given are rags-to-riches-to-retribution stories (Holmes was an exception, with her upper-middle-class background). The sky-high ambitions of Kalanick, Holmes and Sorokin were all eventually brought back down to earth. Sorokin and Holmes both ended up in prison, and Kalanick was ousted from the company he founded.

But with the subtlest of twists, these stories didn’t have to end that way. They could have been the story of almost any corporate America hustler who triumphed. With a little more substance and a little less scam, you could swap Elizabeth Holmes for Steve Jobs. They even dressed the same.

Obviously, scamming seems to sell. These people fascinate us. Part of the appeal is no doubt due to a class-conflict narrative: the scrappy hustler climbing the social ranks by whatever means possible. We love to watch “one of us” pull the wool over the eyes of the social elite.

In the case of Anna Sorokin, Laura Craik dissects our fascination in a piece published in the UK’s Evening Standard:

“The reason people are so obsessed with Sorokin is simple: she had the balls to pull off on a grand scale what so many people try and fail to pull off on a small one. To use a phrase popular on social media, Sorokin succeeded in living her best life — right down to the clothes she wore in court, chosen by a stylist. Like Jay Gatsby, she was a deeply flawed embodiment of The American Dream: a person from humble beginnings who rose to achieve wealth and social status. Only her wealth was borrowed and her social status was conferred via a chimera of untruths.”

Laura Craik – UK Evening Standard

This type of behavior is nothing new. It’s always been a part of us. In 1513, a Florentine bureaucrat named Niccolo Machiavelli gave it a name — actually, his name. In writing “The Prince,” he condoned bad behavior as long as the end goal was to elevate oneself. In a Machiavellian world, it’s always open season on suckers: “One who deceives will always find those who allow themselves to be deceived.”

For the past five centuries, Machiavellianism has been synonymous with evil. It was a recognized character flaw, described as “a personality trait that denotes cunningness, the ability to be manipulative, and a drive to use whatever means necessary to gain power. Machiavellianism is one of the traits that forms the Dark Triad, along with narcissism and psychopathy.”

Now, however, that stigma seems to be disappearing. In a culture obsessed with success, Machiavellianism becomes a justifiable means to an end, so much so that we’ve given this culture its own hashtag: #scamculture: “A scam culture is one in which scamming has not only lost its stigma but is also valorized. We rebrand scamming as ‘hustle,’ or the willingness to commodify all social ties, and this is because the ‘legitimate’ economy and the political system simply do not work for millions of Americans.”

It’s a culture that’s very much at home in Silicon Valley. The tech world is steeped in Machiavellianism. Its tenets are accepted — even encouraged — business practices in the Valley. “Fake it til you make it” is tech’s modus operandi. The example of Niccolo Machiavelli has gone from being a cautionary tale to a how-to manual.

But these predatory practices come at a price. Doing business this way destroys trust. And trust is still, by far, the best strategy for our mutual benefit. In behavioral economics, there’s something called “tit for tat,” which according to Wikipedia “posits that a person is more successful if they cooperate with another person. Implementing a tit-for-tat strategy occurs when one agent cooperates with another agent in the very first interaction and then mimics their subsequent moves. This strategy is based on the concepts of retaliation and altruism.”

In countless game theory simulations, tit for tat has proven to be the most successful strategy for long-term success. It assumes a default position of trust, only moving to retaliation if required.
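For readers who haven’t run into it, here is a minimal sketch of tit for tat playing an iterated prisoner’s dilemma against an always-defect strategy. The payoff values are the standard textbook ones, and the code is purely illustrative – it isn’t drawn from any particular tournament or study.

```python
# Minimal sketch: tit for tat vs. always-defect in an iterated prisoner's dilemma.
# Payoff matrix uses standard textbook values; this is an illustration only.
PAYOFF = {  # (my move, their move) -> my points
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return "C" if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return "D"

def play(strategy_a, strategy_b, rounds=10):
    score_a = score_b = 0
    history_a, history_b = [], []   # each strategy only sees the other side's moves
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        history_a.append(move_a)
        history_b.append(move_b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))      # mutual trust pays: (30, 30)
print(play(tit_for_tat, always_defect))    # retaliation limits the damage: (9, 14)
```

Run over enough rounds and against enough opponents, that cooperative opener plus proportionate retaliation is what keeps tit for tat ahead – the game-theory version of the argument that trust pays.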

Our society needs trust to function properly. In a New York Times op-ed entitled “Why We Need to Address Scam Culture,” Tressie McMillan Cottom writes,  

“Scams weaken our trust in social institutions, but their going mainstream — divorced from empathy for the victims or stigma for the perpetrators — means that we have accepted scams as institutions themselves.”

Tressie McMillan Cottom – NY Times

The reason that trust is more effective than scamming is that predatory practices are self-limiting. You can only be a predator if you have enough prey. In a purely Machiavellian world, trust disappears — and there are no easy marks to prey upon.

I am Generation Jones

I was born in 1961. I always thought that technically made me a baby boomer. But I recently discovered that I am, in fact, part of Generation Jones.

If you haven’t heard of that term (as I had not, until I read a post on it a few weeks ago) Generation Jones refers to people born from 1955 to 1964 — a cusp generation squeezed between the massive boomer block and Gen X.

That squares with me. I always somehow knew I wasn’t really a boomer, but I also knew I wasn’t Gen X. And now I know why. I, along with Barack Obama and Wayne Gretzky, was squarely in the middle of Generation Jones.

I always felt the long shadow of World War II defined baby boomers, but it didn’t define me. My childhood felt like eons removed from the war. Most of the more-traumatic wounds had healed by the time I was riding my trike through the relatively quiet suburban streets of Calgary, Alberta.

I didn’t appreciate the OK Boomer memes, not because I was the butt of them, but more because I didn’t really feel they applied to me. They didn’t hit me where I live. It was like I was winged by a shot meant for someone else.

OK Boomer digs didn’t really apply to my friends and contemporaries either, all of whom are also part of Generation Jones. For the most part, we’re trying to do our best dealing with climate change, racial inequality, more fluid gender identification and political polarization. We get it. Is there entitlement? Yeah, more than a little. But we’re trying.

And I also wasn’t part of Gen X. I wasn’t a latchkey kid. My parents didn’t obsess over the almighty dollar, so I didn’t feel a need to push back against it. My friends and I worked a zillion hours, because we were — admittedly — still materialistic. But it was a different kind of materialism, one edged with more than a little anxiety.

I hit the workforce in the early ‘80s, right in the middle of a worldwide recession. Generation Jones certainly wanted to get ahead, but we also wanted to keep our jobs, because if we lost them, there was no guarantee we’d find another.

When boomers were entering the workforce, through the 1970s, Canada’s unemployment rate hovered in the 6% to 8% range (U.S. numbers varied but roughly followed the same pattern). In 1982, the year I tried to start my career, it suddenly shot up to 13%. Through the ‘80s, as Gen X started to get their first jobs, it declined again to the 8% range. Generation Jones started looking for work just when a job was historically the hardest to find.

It wasn’t just the jobless rate. Interest rates also skyrocketed to historic levels in the early ‘80s. Again, using data from the Bank of Canada: its benchmark rate peaked at an astronomical 20.78% the same month I turned 20, in 1981. Not only could we not find jobs, we couldn’t have afforded credit even if we could get one.

So yes, we were trying to keep up with the Joneses — this is where the name for our generation comes from, coined by social commentator Jonathon Pontell — but it wasn’t all about getting ahead. A lot of it was just trying to keep our heads above water.

We were a generation moving into adulthood at the beginning of HIV/AIDS, Reaganomics, globalization and the mass deindustrialization of North America. All the social revolutions of the ‘60s and ‘70s had crystallized to the point where they now had real-world consequences. We were figuring out a world that seemed to be pivoting sharply.

As I said, I always felt that I was somewhat accidentally lodged between baby boomer and Gen X, wading my way through the transition.

Part of that transition involved the explosion of technology that became much more personal at the beginning of the 1980s.  To paraphrase Shakespeare in “Twelfth Night”: Some are born with technology, some achieve technology, and some have technology thrust upon them.

Generation Jones is in the last group.

True boomers could make the decision to ignore technology and drift through life just adopting what they absolutely had to. Gen X grew up with the rudiments of technology, making it more familiar territory for them. The leading edge of that generation started entering the workforce in the mid-‘80s. Computers were becoming more common. The Motorola “brick” cellphone had debuted. Technology was becoming ubiquitous – impossible to ignore.

But we were caught in between. We had to make a decision: Do we embrace technology, or do we fight against it? A lot of that decision depended on what we wanted to do for a living. Through the ‘80s, one by one, industries were being transformed by computers and digitalization.

Often, we of Generation Jones got into our first jobs working on the technology of yesterday — and very early in our careers, we were forced to adopt the technologies of tomorrow.

I started as a radio copywriter in 1982, and my first ads were written on an IBM Selectric and produced by cutting and patching two-track audio tape together on a reel-to-reel machine with razor blades and splicing tape. Just a few years later, I was writing on an Apple IIe, and ads were starting to be recorded digitally. That shift in technology happened just when our generation was beginning our careers. Some of us went willingly, some of us went kicking and screaming.

This straddling of two very different worlds seems to personify my generation. I think, with the hindsight of history, we will identify the early ‘80s as a period of significant transition in almost every aspect of our culture. Obviously, all generations had to navigate that transition, but for Generation Jones, that period just happened to coincide with what is typically the biggest transition for anyone in any generation: the passing from childhood to adulthood. It is during this time that we take the experiences of growing up and crystallize them into the foundations of who we will be for the rest of our lives.

For Generation Jones, those foundations had to be built on the fly, as the ground kept moving beneath our feet.

Making Time for Quadrant Two

Several years ago, I read Stephen Covey’s “The 7 Habits of Highly Effective People.” It had a lasting impact on me. Through my life, I have found myself relearning those lessons over and over again.

One of them was the four quadrants of time management. How we spend our time in these quadrants determines how effective we are.

Imagine a box split into four quarters. In the upper left quarter, we’ll put a label: “Important and Urgent.” Next to it, in the upper right, we’ll put a label saying “Important But Not Urgent.” The label for the lower left is “Urgent but Not Important.” And the last quadrant — in the lower right — is labeled “Not Important nor Urgent.”

The upper left quadrant — “Important and Urgent” — is our firefighting quadrant. It’s the stuff that is critical and can’t be put off, the emergencies in our life.

We’ll skip over quadrant two — “Important But Not Urgent” — for a moment and come back to it.

In quadrant three — “Urgent But Not Important” — are the interruptions that other people bring to us. These are the times we should say, “That sounds like a you problem, not a me problem.”

Quadrant four is where we unwind and relax, occupying our minds with nothing at all in order to give our brains and body a chance to recharge. Bingeing Netflix, scrolling through Facebook or playing a game on our phones all fall into this quadrant.

And finally, let’s go back to quadrant two: “Important But Not Urgent.” This is the key quadrant. It’s here where long-term planning and strategy live. This is where we can see the big picture.

The secret of effective time management is finding ways to shift time spent from all the other quadrants into quadrant two. It’s managing and delegating emergencies from quadrant one, so we spend less time fire-fighting. It’s prioritizing our time above the emergencies of others, so we minimize interruptions in quadrant three. And it’s keeping just enough time in quadrant four to minimize stress and keep from being overwhelmed.
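If it helps to make the model concrete, here is a toy sketch of the sorting exercise. The tasks and their importance/urgency flags are invented purely for illustration.

```python
# Toy sketch of Covey's four-quadrant sort. The tasks and their
# importance/urgency flags are invented purely for illustration.
QUADRANTS = {
    (True, True): "Q1: Important and Urgent",
    (True, False): "Q2: Important But Not Urgent",
    (False, True): "Q3: Urgent but Not Important",
    (False, False): "Q4: Not Important nor Urgent",
}

tasks = [
    ("Server outage affecting customers", True, True),
    ("Draft next year's privacy strategy", True, False),
    ("Reply to a 'quick question' ping", False, True),
    ("Scroll social media", False, False),
]

for name, important, urgent in tasks:
    print(f"{QUADRANTS[(important, urgent)]}: {name}")
```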

The lesson of the four quadrants came back to me when I was listening to an interview with Dr. Sandro Galea, epidemiologist and author of “The Contagion Next Time.” Dr. Galea was talking about how our health care system responded to the COVID pandemic. The entire system was suddenly forced into quadrant one. It was in crisis mode, trying desperately to keep from crashing. Galea reminded us that we were forced into this mode despite there being hundreds of lengthy reports from previous pandemics — notably the SARS crisis — containing thousands of suggestions that could have helped to partially mitigate the impact of COVID.

Few of those suggestions were ever implemented. Our health care system, Galea noted, tends to continually lurch back and forth within quadrant one, veering from crisis to crisis. When a crisis is over, rather than go to quadrant two and make the changes necessary to avoid similar catastrophes in the future, we put the inevitable reports on a shelf where they’re ignored until it is — once again — too late.

For me, that paralleled a theme I have talked about often in the past — how we tend to avoid grappling with complexity. Quadrant two stuff is, inevitably, complex in nature. The quadrant is jammed with what we call wicked problems. In a previous column, I described these as, “complex, dynamic problems that defy black-and-white solutions. These are questions that can’t be answered by yes or no — the answer always seems to be maybe.  There is no linear path to solve them. You just keep going in loops, hopefully getting closer to an answer but never quite arriving at one. Usually, the optimal solution to a wicked problem is ‘good enough — for now.’”

That’s quadrant two in a nutshell. Quadrant-one problems must be triaged into a sort of false clarity. You have to deal with the critical stuff first. The nuances and complexity are, by necessity, ignored. That all gets pushed to quadrant two, where we say we will deal with it “someday.”

Of course, someday never comes. We either stay in quadrant one, are hijacked into quadrant three, or collapse through sheer burn-out into quadrant four. The stuff that waits for us in quadrant two is just too daunting to even consider tackling.

This has direct implications for technology and every aspect of the online world. Our industry, because of its hyper-compressed timelines and the huge dollars at stake, seems firmly lodged in the urgency of quadrant one. Everything on our to-do list tends to be a fire we have to put out. And that’s true even if we only consider the things we intentionally plan for. When we factor in the unplanned emergencies, quadrant one is a time-sucking vortex that leaves nothing for any of the other quadrants.

But there is a seemingly infinite number of quadrant two things we should be thinking about. Take social media and privacy, for example. When an online platform has a massive data breach, that is a classic quadrant one catastrophe. It’s all hands on deck to deal with the crisis. But all the complex questions around what our privacy might look like in a data-inundated world fall into quadrant two. As such, they are things we don’t think much about. They’re important, but not urgent.

Quadrant two thinking is systemic thinking, long-term and far-reaching. It allows us to build the foundations that help to mitigate crises and minimize unintended consequences.

In a world that seems to rush from fire to fire, it is this type of thinking that could save our asses.

The News Cycle, Our Attention Span and that Oscar Slap

If your social media feed is like mine, it was burning up this Monday with the slap heard around the world. Was Will Smith displaying toxic masculinity? Was “it was a joke” sufficient defence for Chris Rock’s staggering lack of ability to read the room? Was Smith’s acceptance speech legendary or just really, really lame?

More than a few people just sighed and chalked it up as another scandal for the beleaguered awards show. This was one post I saw from a friend on Facebook: “People smiling and applauding as if an assault never happened is probably Hollywood in a nutshell.”

Whatever your opinion, the world was fascinated by what happened. The slap trended number one on Twitter through Sunday night and Monday morning. On CNN, the top trending stories on Monday morning were all about the “slap.” You would have thought that there was nothing happening in the world more important than one person slapping another. Not the world teetering on the edge of a potential world war. Not a global economy that can’t seem to get itself in gear. Not a worldwide pandemic that just won’t go away and has just pushed Shanghai – a city of 30 million – back into a total lockdown.

And the spectre of an onrushing climate disaster? Nary a peep in Monday’s news cycle.

We commonly acknowledge – when we do take the time to stop and think about it – that our news cycles have about the same attention span as a 4-year-old on Christmas morning. No matter what we have in our hands, there’s always something brighter and shinier waiting for us under the tree. We typically attribute this to the declining state of journalism. But we – the consumers of news – are the ones who continually ignore the stories that matter in favour of gossipy tidbits.

This is just the latest example of that. It is nothing more than human nature. But there is a troubling trend here that is being accelerated by the impact of social media. This is definitely something we should pay attention to.

The Confounding Nature of Complexity

Just last week, I talked about something psychologists call a locus of control. Essentially it is defined by the amount of control you feel you have over your life. In times of stress, unpredictability or upheaval, our own perceived span of control tends to narrow to the things we have confidence we can manage. Our ability to cope draws inward, essentially circling the wagons around the last vestiges of our capability to direct our own circumstances. 

I believe the same is true with our ability to focus attention. The more complex the world gets, the more we tend to focus on things that we can easily wrap our minds around. It has been shown repeatedly that anxiety impacts the ability of our brain to focus. A study from Finland’s Åbo Akademi University showed that anxiety reduces the ability of the brain to focus on tasks. It eats away at our working memory, leaving us with a reduced capacity to integrate concepts and work things out. Complex, unpredictable situations naturally raise our level of anxiety, leading us to retreat to things we don’t have to work too hard to understand.

The irony here is the more we are aware of complex and threatening news stories, the more we go right past them to things like the Smith-Rock story. It’s like catnip to a brain that’s trying to retreat from the real news because we can’t cope with it.

This isn’t necessarily the fault of journalism; it’s more a limitation of our own brains. On Monday morning, CNN offered plenty of coverage dealing with the new airstrikes in Ukraine, Biden’s inflammatory remarks about Putin, Trump’s attempts to block Congress from counting votes and the restriction of LGBTQ awareness in the classrooms of Florida. But none of those stories were trending. What was trending were three stories about Rock and Smith, one about the Oscar winners and another about a 1,600-pound shark. That’s what we were collectively reading.

False Familiarity

It’s not just that the real news is too complex for us to handle that made the Rock/Smith story so compelling. Our built-in social instincts also made it irresistible.

Evolution has equipped us with highly attuned social antennae. Humans are herders, and when you travel in a herd, your ability to survive is highly dependent on picking up signals from your fellow herders. We have highly evolved instincts to help us determine who we can trust and who we should protect ourselves from. We are quick to judge others, and even quicker to gossip about behavior that steps over those invisible boundaries we call social norms.

For generations, these instincts were essential when we had to keep tabs on the people closest to us. But with the rise of celebrity culture in the last century, we now apply those same instincts to people we think we know. We pass judgement on the faces we see on TV and in social media. We have a voracious appetite for gossip about the super-rich and the super-famous.

Those foibles may be ours and ours alone, but they’re not helped by the fact that certain celebrities – namely one Mr. Smith – feel compelled to share way too much about themselves with the public at large. Witness his long and tear-laden acceptance speech. Even though I have only a passing interest in the comings and goings of Will and Jada, I know more about their sex lives than those of my closest friends. The social norm that restricts bedroom talk amongst our friends and family is not there with the celebrities we follow. We salivate over salacious details.

No Foul, No Harm?

That’s the one-two punch (sorry, I had to go there) that made the little Oscar ruckus such a hot news item. But what’s the harm? It’s just a momentary distraction from the never-ending shit-storm that defines our daily existence, right?

Not quite.

The more we continually take the path of least resistance in our pursuit of information, the harder it becomes for us to process the complex concepts that make up our reality. When that happens, we tend to attribute too much importance and meaning to these easily digestible nuggets of gossip. As we try to understand complex situations (which covers pretty much everything of importance in our world today), we start relying too much on cognitive shortcuts like availability bias and representativeness bias. In the first case, we apply whatever information we have at hand to every situation; in the second, we resort to substituting stereotypes and easy labels in place of trying to understand the reality of an individual or group.

Ironically, it’s exactly this tendency towards cognitive laziness that was skewered in one of Sunday night’s nominated features, Adam McKay’s Don’t Look Up.

Of course, it was ignored. As Will Smith said, sometimes, “art imitates life.”