As the “Office” Goes, What May Go With It?

In 2017, Apple employees moved into the new Apple headquarters, called the Ring, in Cupertino, California. This was the last passion project of Steve Jobs, who personally made the pitch to Cupertino City Council just months before he passed away. Its design was overseen by Apple’s then-Chief Design Officer, Jony Ive. The new headquarters were meant to give Apple’s Cupertino employees the ultimate “sense of place.” They were designed to be organic and flexible, evolving to continue to meet their needs.

Of course, no one saw a global pandemic coming. COVID-19 drove almost all of those employees to work from home, and the massive campus sat empty. Now, as Apple tries to bring everyone back to the Ring, it seems that what has evolved are the expectations of the employees, who have taken a hard left turn away from the very idea of “going to work.”

Just last month, Apple had to backtrack on its edict demanding that everyone start coming back to the office three days a week. A group that calls itself “Apple Together” published a letter asking the company to embrace a hybrid schedule that formalized remote work. And one of Apple’s leading AI engineers, Ian Goodfellow, resigned in May over Apple’s insistence on returning to the office.

Perhaps Apple’s Ring is just the most elegant example of a last-gasp concept tied to a generation that is rapidly fading from the office into retirement. The Ring could be the world’s biggest and most expensive anachronism. 

The Virtual Workplace debate is not new for Silicon Valley. Almost a decade ago, Marissa Mayer also issued a “Back to the Office” edict when she came from Google to take over the helm at Yahoo. A company memo laid out the logic:

“To become the absolute best place to work, communication and collaboration will be important, so we need to be working side-by-side. That is why it is critical that we are all present in our offices. Some of the best decisions and insights come from hallway and cafeteria discussions, meeting new people, and impromptu team meetings. Speed and quality are often sacrificed when we work from home. We need to be one Yahoo!, and that starts with physically being together.”

Marissa Mayer, Yahoo Company Memo

The memo was not popular with Yahooligans. I was still making regular visits to the Valley back then and heard first-hand the grumblings from some of them. My own agency actually had a similar experience, albeit on a much smaller scale.

Over the past decade – until COVID – employees and employers had tentatively tested the realities of a remote workplace. Then, in the blink of an eye, the pandemic turned this ongoing experiment into the only option available. If businesses wanted to continue operating, they had to embrace working from home. And if employees wanted to keep their jobs, they had to make room on the dining room table for their laptops. Overnight, Zoom meetings and communicating through Slack became the new normal.

Sometimes, necessity is the mother of adoption. And with a 27-month (and counting) runway to get used to it, it appears the virtual workplace is here to stay.

In some ways, the virtual office represents the unbundling of our work lives. Because our world was constrained by the physical limitations of distance, we tended to deal with it as a whole. Everything came as a package assembled by proximity. We operated inside an ecosystem that shared the same physical space. This was true for almost everything in our lives, including our jobs. The workplace was a place, with physical and social properties that existed within that place.

But technology allows us to unbundle that experience. We can separate work from place. We pick and choose what seem to be the most important things we need to do our jobs and take them with us, free from the physical restraints that once kept us all in the same place at the same time. In that process, there are both intended and unintended consequences.

On the face of it, freeing our work from its physical constraints (when this is possible) makes all kinds of sense. For the employer, it eliminates the need for maintaining a location, along with the expense of doing so. And, when you can work anywhere, you can also recruit from anywhere, dramatically opening up the talent pool.

For the employee, it’s probably even more attractive. You can work on your schedule, giving you more flexibility to maintain a healthy work-life balance. Long and frustrating commutes are eliminated. Your home can be wherever you want to live, rather than where you have to live because of your job.

Like I said, when you look at all these intended consequences, a virtual workplace seems to be all upside, with little downside. However, the downsides are starting to show through the cracks created by the unintended consequences.

To me, this seems somewhat analogous to the introduction of monoculture agriculture. You could say this also represented the unbundling of farming for the sake of efficiency. Focusing on one crop in one place at a time made all kinds of sense. You could standardize planting, fertilizing, watering and harvesting based on what was best for the chosen crop. It allowed for the introduction of machinery, increasing yields and lowering costs. Small wonder that over the past two centuries – and especially since World War II – the world rushed to embrace monoculture agriculture.

But now we’re beginning to see the unintended consequences. Dr. Frank Uekotter, Professor of Environmental Humanities at the University of Birmingham, calls monoculturalism a “centuries-long stumble.” He warns that it has developed its own momentum: “Somehow that fledgling operation grew into a monster. We may have to cut our losses at some point, but monoculture has absorbed decades of huge investment and moving away from it will be akin to attempting a handbrake turn in a supertanker.”

We’re learning – probably too late – that nature never intended plants to be surrounded only by other plants of the same kind. Monocultures lead to higher rates of disease and the degradation of the environment. One of the most extreme examples is how monoculture oil palm plantations are swallowing the biodiverse Amazon rain forest at an alarming rate. Sometimes, as Joni Mitchell reminds us, “You don’t know what you’ve got til it’s gone.”

The same could be true for the traditional workplace. I think Marissa Mayer was on to something. We are social animals and have evolved to share spaces with others of our species. There is a vast repertoire of evolved mechanisms and strategies that make us able to function in these environments. While a virtual workplace may be logical, we may be sacrificing something more ephemeral that lies buried in our humanness. We can’t see it because we’re not exactly sure what it is, but we’ll know it when we lose it.

Maybe it’s loyalty. A few weeks ago, the Wharton School of Business published an article entitled “Is Workplace Loyalty Gone for Good?” We have all heard of the “Great Resignation”: last year, more than 40 million people in the U.S. quit their jobs. The advent of the virtual workplace has also meant a virtual job market. Employees are in the driver’s seat. Everything is up for renegotiation. As the article said, “the modern workplace has become increasingly transactional.”

Maybe that’s a good thing. Maybe not. That’s the thing with unintended consequences. Only time will tell.

Minority Report Might Be Here — 30 Years Early

“Sometimes, in order to see the light, you have to risk the dark.”

Iris Hineman – 2002’s Minority Report

I don’t usually look to Hollywood for deep philosophical reflection, but today I’m making an exception. Steven Spielberg’s 2002 film Minority Report stands on some fascinating ethical ground. For me, it raised a rather interesting question: could you get a clear enough picture of someone’s mental state through their social media feed to predict pathological behavior? And – even if you could – should you?

If you’re not familiar with the movie, here is the background on this question. In the year 2054, three individuals possess a psychic ability to see events in the future, primarily premeditated murders. These individuals are known as Precognitives, or Precogs. Their predictions are used to set up a PreCrime Division in Washington, DC, where suspects are arrested before they can commit the crime.

Our Social Media Persona

A persona is a social façade – a mask we don that portrays a role we play in our lives. For many of us that now includes the digital stage of social media. Here too we have created a persona, where we share the aspects of ourselves that we feel we need to put out there on our social media platform of choice.

What may surprise us, however, is that even though we supposedly control what we share, even that reveals a surprising amount about who we are – both intentionally and unintentionally. And, if those clues are troubling, does our society have a responsibility – or the right – to proactively reach out?

In a commentary published in the American Journal of Psychiatry, Dr. Shawn McNeil said of social media,

“Scientists should be able to harness the predictive potential of these technologies in identifying those most vulnerable. We should seek to understand the significance of a patient’s interaction with social media when taking a thorough history. Future research should focus on the development of advanced algorithms that can efficiently identify the highest-risk individuals.”

Dr. Shawn McNeil

Along this theme, a 2017 study (Liu & Campbell) found that where we fall on the so-called “Big Five” personality traits – neuroticism, extraversion, openness, agreeableness and conscientiousness – as well as the “Big Two” metatraits – plasticity and stability – can be a pretty accurate predictor of how we use social media.

But what if we flip this around? If we just look at a person’s social media feed, could we tell what their personality traits and metatraits are with a reasonable degree of accuracy? Could we, for instance, assess their mental stability and pick up the warning signs that they might be on the verge of doing something destructive, either to themselves or to someone else? Following this logic, could we spot a potential crime before it happens?
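To make that “flip it around” idea concrete, here is a minimal sketch of what inferring a trait score from post text could look like: a simple bag-of-words regression. Everything in it (the posts, the scores, the model choice) is an invented placeholder for illustration, not the method used in the Liu & Campbell study.

```python
# Illustrative sketch only: predicting a Big Five trait score from the text of posts.
# The tiny dataset and the extraversion scores below are invented placeholders, not real data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

posts = [
    "Had an amazing night out with friends, can't wait for the next one!",
    "Spent the weekend alone reorganizing my bookshelves. Bliss.",
    "Why does everything always go wrong for me?",
    "Finished the project two days early. Planning pays off.",
]
extraversion = [0.9, 0.3, 0.4, 0.5]  # hypothetical questionnaire scores, scaled 0-1

# Turn words into features, then fit a regularized linear model to the trait scores.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(posts, extraversion)

print(model.predict(["Big party at my place this weekend, everyone's invited!"]))
```

A real system would need thousands of labeled profiles, and it would raise exactly the consent questions that follow.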

Pathological Predictions

Police are already using social media to track suspects and find criminals, but typically only after the crime has occurred. For instance, police departments regularly scan social media using facial recognition technology to track down suspects. They comb a suspect’s social media feeds to establish whereabouts and gather evidence. Of course, you can only scan content that people are willing to share. But when these platforms are as ubiquitous as they are, it is consistently astounding how much people share, even when they’re on the run from the law.

There are certainly ethical questions about mining social media content for law enforcement purposes. For example, facial recognition algorithms tend to produce more false positives for people with darker skin, raising concerns about racial profiling. But at least this activity tries to stick with the spirit of the tenet our justice system is built on: you are innocent until proven guilty.

There must be a temptation, however, to go down the same path as Minority Report and try to pre-empt crime – by identifying a “Precrime”.

Take a school shooting, for example. In the May 31 issue of Fortune, senior technology journalist Jeremy Kahn asked this question: “Could A.I. prevent another school shooting?” In the article, Kahn referenced a study in which a team at Cincinnati Children’s Hospital Medical Center used artificial intelligence software to analyze transcripts of teens who went through a preliminary interview with psychiatrists. The goal was to see how closely the algorithm’s judgments of whether a subject had a propensity for violence matched the more extensive assessments made by trained psychiatrists. They found the assessments matched about 91% of the time.

I’ll restate that so the point hits home: an A.I. algorithm that scanned a preliminary assessment matched the much more extensive assessments done by expert professionals about nine times out of 10 – even without access to the extensive records and patient histories the psychiatrists had at their disposal.
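For what it’s worth, that “91%” is a simple agreement rate, which a toy example makes easy to picture (the labels below are invented placeholders, not data from the Cincinnati study):

```python
# Toy illustration of percent agreement between two sets of binary risk labels.
ai_labels    = [1, 0, 0, 1, 1, 0, 0, 1, 0, 0]  # algorithm's call on each preliminary transcript
psych_labels = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]  # clinicians' fuller assessment of the same teens

agreement = sum(a == p for a, p in zip(ai_labels, psych_labels)) / len(ai_labels)
print(f"Agreement: {agreement:.0%}")  # 90% for this toy set of 10 cases
```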

Let’s go one step further and connect those two dots: If social media content could be used to identify potentially pathological behaviors, and if an AI could then scan that content to predict whether those behaviors could lead to criminal activities, what do we do with that?

It puts us squarely on a very slippery slope, but we have to acknowledge that we are getting very close to a point where technology forces us to ask a question we’ve never been able to ask before: “If we could – with a reasonable degree of success – prevent violent crimes that haven’t happened yet, should we?”

Sarcastic Much?

“Sarcasm is the lowest form of wit, but the highest form of intelligence.”

Oscar Wilde

I fear the death of sarcasm is nigh. The alarm bells started going off when I saw a tweet from John Cleese that referenced a bit from “The Daily Show.” In it, Trevor Noah used sarcasm to run circles around the logic of Supreme Court Justice Brett Kavanaugh, who had opined that Roe v. Wade should be overturned, essentially booting the question down to the state level to decide.

Against my better judgement, I started scrolling through the comments on the thread — and, within the first couple, found that many of those commenting had completely missed Noah’s point. They didn’t pick up on the sarcasm — at all. In fact, to say they missed the point is like saying Columbus “missed” India. They weren’t even in the same ocean. Perhaps not the same planet.

Sarcasm is my mother tongue. I am fluent in it. So I’m very comfortable with sarcasm. I tend to get nervous in overly sincere environments.

I find sarcasm requires almost a type of meta-cognition, where you have to be able to mentally separate the speaker’s intention from what they’re saying. If you can hold the two apart in your head, you can truly appreciate the art of sarcasm. It’s this finely balanced and recurrent series of contradictions — with tongue firmly placed in cheek — that makes sarcasm so potentially powerful. As used by Trevor Noah, it allows us to air out politically charged issues and consider them at a mental level at least one step removed from our emotional gut reactions.

As Oscar Wilde knew — judging by his quote at the beginning of the post — sarcasm can be a nasty form of humor, but it does require some brain work. It’s a bit of a mental puzzle, forcing us to twist an issue in our heads like a cognitive Rubik’s Cube, looking at it from different angles. Because of this, it’s not for everyone. Some people are just too earnest (again, with a nod to Mr. Wilde) to appreciate sarcasm.

The British excel at sarcasm. John Cleese is a high priest of sarcasm. That’s why I follow him on Twitter. Wilde, of course, turned sarcasm into art. But as Ricky Gervais (who has his own black belt in sarcasm) explains in this piece for Time, sarcasm — and, to be more expansive, all types of irony — have been built into the British psyche over many centuries. This isn’t necessarily true for Americans. 

“There’s a received wisdom in the U.K. that Americans don’t get irony. This is of course not true. But what is true is that they don’t use it all the time. It shows up in the smarter comedies but Americans don’t use it as much socially as Brits. We use it as liberally as prepositions in everyday speech. We tease our friends. We use sarcasm as a shield and a weapon. We avoid sincerity until it’s absolutely necessary. We mercilessly take the piss out of people we like or dislike basically. And ourselves. This is very important. Our brashness and swagger is laden with equal portions of self-deprecation. This is our license to hand it out.”

Ricky Gervais – Time, November 9, 2011

That was written just over a decade ago. I believe it’s even more true today. If you choose to use sarcasm in our age of fake news and social media, you do so at your peril. Here are three reasons why:

First, as Gervais points out, sarcasm doesn’t play equally across all cultures. Americans — as one example — tend to be more sincere and, as such, take many things meant as sarcastic at face value. Sarcasm might hit home with a percentage of a U.S. audience, but it will go over a lot of American heads. It’s probably not a coincidence that many of those heads might be wearing MAGA hats.

Also, sarcasm can be fatally hamstrung by our TL;DR rush to scroll to the next thing. Sarcasm typically saves its payoff until the end. It intentionally creates a cognitive gap, and you have to be willing to stay with it to realize that someone is, in the words of Gervais, taking the “piss out of you.” Bail too early and you might never recognize it as sarcasm. I suspect more than a few of those who watched Trevor Noah’s piece didn’t stick through to the end before posting a comment.

Finally, and perhaps most importantly, social media tends to strip sarcasm of its context, leaving it hanging out there to be misinterpreted. If you are a regular watcher of “The Daily Show with Trevor Noah,” “Last Week Tonight with John Oliver,” or even “Late Night with Seth Meyers” (one American who is a master of sarcasm), you realize that sarcasm is part and parcel of it all. But when you repost a bit from any of these shows to social media, moving it beyond its typical audience, you have also removed all the signs that say “warning: sarcastic content ahead.” You are leaving the audience to their own devices to “get it.” And that almost never turns out well on social media.

You may say that this is all for the good. The world doesn’t really need more sarcasm. An academic study found that sarcastic messages can be more hurtful to the recipient than a sincere message. Sarcasm can cut deep, and because of this, it can lead to more interpersonal conflict.

But there’s another side to sarcasm. That same study also found that sarcasm can require us to be more creative. The mental mechanisms you use to understand sarcasm are the very same ones we need to use to be more thoughtful about important issues. It de-weaponizes these issues by using humor, while it also forces us to look at them in new ways.

Personally, I believe our world needs more Trevor Noahs, John Olivers and Seth Meyers. Sarcasm, used well, can make us a little smarter, a little more open-minded, and — believe it or not — a little more compassionate.

Using Science for Selling: Sometimes Yes, Sometimes No

A recent study out of Ohio State University seems like one of those the world really didn’t need. The researchers were exploring whether introducing science into the marketing of chocolate chip cookies would help sell them.

And for those of us who make a living in marketing, this is one of those things that might make us say, “Duh, you needed research to tell us that? Of course you don’t use science to sell chocolate chip cookies!”

But bear with me, because if we keep asking why enough, we can come up with some answers that might surprise us.

So, what did the researchers learn? I quote,

“Specifically, since hedonic attributes are associated with warmth, the coldness associated with science is conceptually disfluent with the anticipated warmth of hedonic products and attributes, reducing product valuation.”

Ohio State Study

In other words – much simpler and fewer of them – science doesn’t help sell cookies. And that’s because our brains think differently about some things than about others.

For example, a study published in the journal Computers in Human Behavior (Casado-Aranda, Sanchez-Fernandez and Garcia) found that when we’re exposed to “hedonic” ads – ads that appeal to pleasurable sensations – the parts of our brain that retrieve memories kick in. This isn’t true when we see utilitarian ads. Predictably, we approach those ads as a problem to be solved and engage the parts of our brain that control working memory and the ability to focus our attention.

Essentially, these two advertising approaches take two different paths in our awareness, one takes the “thinking” path and one takes the “feeling” path. Or, as Nobel Laureate Daniel Kahneman would say, one takes the “thinking slow” path and one takes the “thinking fast” path.

Yet another study begins to show why this may be so. Let’s go back to chocolate chip cookies for a moment. When you smell a fresh baked cookie, it’s not just the sensory appeal “in the moment” that makes the cookie irresistible. It’s also the memories it brings back for you. We know that how things smell is a particularly effective way to trigger this connection with the past. Certain smells – like that of cookies just out of the oven – can be the shortest path between today and some childhood memory. These are called associative memories. And they’re a big part of “feeling” something rather than just “thinking” about it.

At the University of California, Irvine, neuroscientists discovered a very specific type of neuron in our memory centers that oversees the creation of new associative memories. These neurons are called “fan cells,” and it seems they are responsible for creating the link between new input and those emotion-inducing memories we may have tucked away from our past. And – critically – it seems that dopamine is the key to linking the two. When our brain “smells” a potential reward, it kicks these fan cells into gear and is bathed in the “warm fuzzies.” Lead researcher Kei Igarashi said,

“We never expected that dopamine is involved in the memory circuit. However, when the evidence accumulated, it gradually became clear that dopamine is involved. These experiments were like a detective story for us, and we are excited about the results.”

Kei Igarashi – University of California – Irvine

Not surprisingly – as our first study found – introducing science into this whole process can be a bit of a buzz kill. It would be like inviting Bill Nye the Science Guy to teach you about quantum physics during your Saturday morning cuddle time.

All of this probably seems overwhelmingly academic to you. Selling something like chocolate chip cookies shouldn’t take three different scientific studies and strapping several people inside an fMRI machine to explain. We should be able to rely on our guts, and our guts know that science has no place in a campaign built on an emotional appeal.

But there is a point to all this. Different marketing approaches are handled by different parts of the brain, and knowing that allows us to reinforce our marketing intuition with a better understanding of why we humans do the things we do.

Utilitarian appeals activate the parts of the brain that are front and center, the data crunching, evaluating and rational parts of our cognitive machinery.

Hedonic appeals probe the subterranean depths of our brains, unpacking memories and prodding emotions below the threshold of conscious awareness. We respond viscerally – which literally means “from our guts.”

If we’re talking about selling chocolate chip cookies, we have moved about as far towards the hedonic end of the scale as we can. At the other end we would find something like motor oil – where scientific messaging such as “advanced formulation” or “proven engine protection” would be more persuasive. But almost all other products fall somewhere in between. They are a mix of hedonic and utilitarian factors. And we haven’t even factored in the most significant of all consumer considerations – risk and how to avoid it. Think how complex things would get in our brains if we were buying a new car!

Buying chocolate chip cookies might seem like a no-brainer – because – well – it almost is. Beyond dosing our neural pathways with dopamine, our brains barely kick in when considering whether to grab a bag of Chips Ahoy on our next trip to the store. In fact, the last thing you want your brain to do when you’re craving chewy chocolate is to kick in. Then you would start considering things like caloric intake and how you should be cutting down on processed sugar. Chocolate chip cookies might be a no-brainer, but almost nothing else in the consumer world is that simple.

Marketing is relying more and more on data. But data is typically restricted to answering “who”, “what”, “when” and “where” questions. It’s studies like the ones I shared here that start to pick apart the “why” of marketing.

And when things get complex, asking “why” is exactly what we need to do.

Making Time for Quadrant Two

Several years ago, I read Stephen Covey’s “The 7 Habits of Highly Effective People.” It had a lasting impact on me. Throughout my life, I have found myself relearning those lessons over and over again.

One of them was the four quadrants of time management. How we spend our time in these quadrants determines how effective we are.

Imagine a box split into four quarters. In the upper left, we’ll put a label: “Important and Urgent.” Next to it, in the upper right, we’ll put a label saying “Important But Not Urgent.” The label for the lower left is “Urgent But Not Important.” And the last quadrant — in the lower right — is labeled “Not Important nor Urgent.”

The upper left quadrant — “Important and Urgent” — is our firefighting quadrant. It’s the stuff that is critical and can’t be put off, the emergencies in our life.

We’ll skip over quadrant two — “Important But Not Urgent” — for a moment and come back to it.

In quadrant three — “Urgent But Not Important” — are the interruptions that other people bring to us. These are the times we should say, “That sounds like a you problem, not a me problem.”

Quadrant four is where we unwind and relax, occupying our minds with nothing at all in order to give our brains and body a chance to recharge. Bingeing Netflix, scrolling through Facebook or playing a game on our phones all fall into this quadrant.

And finally, let’s go back to quadrant two: “Important But Not Urgent.” This is the key quadrant. It’s here where long-term planning and strategy live. This is where we can see the big picture.

The secret of effective time management is finding ways to shift time spent from all the other quadrants into quadrant two. It’s managing and delegating emergencies from quadrant one, so we spend less time fire-fighting. It’s prioritizing our time above the emergencies of others, so we minimize interruptions in quadrant three. And it’s keeping just enough time in quadrant four to minimize stress and keep from being overwhelmed.
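If it helps to see the scheme laid out in one place, here is a minimal sketch of the four quadrants as a simple lookup on two yes/no questions. The example tasks are my own invented placeholders, not Covey’s.

```python
# A minimal sketch of the four quadrants: classify a task by two boolean flags.
def quadrant(important: bool, urgent: bool) -> str:
    return {
        (True, True):   "Q1: Important and Urgent (firefighting)",
        (True, False):  "Q2: Important but Not Urgent (planning, strategy)",
        (False, True):  "Q3: Urgent but Not Important (interruptions)",
        (False, False): "Q4: Neither (recharging)",
    }[(important, urgent)]

tasks = {
    "Server outage":                   (True, True),
    "Long-term privacy policy review": (True, False),
    "Someone else's last-minute ask":  (False, True),
    "Scrolling Facebook":              (False, False),
}
for name, (imp, urg) in tasks.items():
    print(f"{name} -> {quadrant(imp, urg)}")
```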

The lesson of the four quadrants came back to me when I was listening to an interview with Dr. Sandro Galea, epidemiologist and author of “The Contagion Next Time.” Dr. Galea was talking about how our health care system responded to the COVID pandemic. The entire system was suddenly forced into quadrant one. It was in crisis mode, trying desperately to keep from crashing. Galea reminded us that we were forced into this mode despite there being hundreds of lengthy reports from previous pandemics — notably the SARS crisis — containing thousands of suggestions that could have helped to partially mitigate the impact of COVID.

Few of those suggestions were ever implemented. Our health care system, Galea noted, tends to continually lurch back and forth within quadrant one, veering from crisis to crisis. When a crisis is over, rather than go to quadrant two and make the changes necessary to avoid similar catastrophes in the future, we put the inevitable reports on a shelf where they’re ignored until it is — once again — too late.

For me, that paralleled a theme I have talked about often in the past — how we tend to avoid grappling with complexity. Quadrant two stuff is, inevitably, complex in nature. The quadrant is jammed with what we call wicked problems. In a previous column, I described these as, “complex, dynamic problems that defy black-and-white solutions. These are questions that can’t be answered by yes or no — the answer always seems to be maybe.  There is no linear path to solve them. You just keep going in loops, hopefully getting closer to an answer but never quite arriving at one. Usually, the optimal solution to a wicked problem is ‘good enough — for now.’”

That’s quadrant two in a nutshell. Quadrant-one problems must be triaged into a sort of false clarity. You have to deal with the critical stuff first. The nuances and complexity are, by necessity, ignored. That all gets pushed to quadrant two, where we say we will deal with it “someday.”

Of course, someday never comes. We either stay in quadrant one, are hijacked into quadrant three, or collapse through sheer burnout into quadrant four. The stuff that waits for us in quadrant two is just too daunting to even consider tackling.

This has direct implications for technology and every aspect of the online world. Our industry, because of its hyper-compressed timelines and the huge dollars at stake, seems firmly lodged in the urgency of quadrant one. Everything on our to-do list tends to be a fire we have to put out. And that’s true even if we only consider the things we intentionally plan for. When we factor in the unplanned emergencies, quadrant one is a time-sucking vortex that leaves nothing for any of the other quadrants.

But there is a seemingly infinite number of quadrant two things we should be thinking about. Take social media and privacy, for example. When an online platform has a massive data breach, that is a classic quadrant one catastrophe. It’s all hands on deck to deal with the crisis. But all the complex questions around what our privacy might look like in a data-inundated world fall into quadrant two. As such, they are things we don’t think much about. They’re important, but they’re not urgent.

Quadrant two thinking is systemic thinking, long-term and far-reaching. It allows us to build the foundations that help to mitigate crises and minimize unintended consequences.

In a world that seems to rush from fire to fire, it is this type of thinking that could save our asses.

The News Cycle, Our Attention Span and that Oscar Slap

If your social media feed is like mine, it was burning up this Monday with the slap heard around the world. Was Will Smith displaying toxic masculinity? Was “it was a joke” sufficient defence for Chris Rock’s staggering lack of ability to read the room? Was Smith’s acceptance speech legendary or just really, really lame?

More than a few people just sighed and chalked it up as another scandal for the beleaguered awards show. This was one post I saw from a friend on Facebook: “People smiling and applauding as if an assault never happened is probably Hollywood in a nutshell.”

Whatever your opinion, the world was fascinated by what happened. The slap trended number one on Twitter through Sunday night and Monday morning. On CNN, the top trending stories on Monday morning were all about the “slap.” You would have thought there was nothing happening in the world more important than one person slapping another. Not the world teetering on the edge of a potential world war. Not a global economy that can’t seem to get itself in gear. Not a worldwide pandemic that just won’t go away and has just pushed Shanghai – a city of some 25 million – back into total lockdown.

And the spectre of an onrushing climate disaster? Nary a peep in Monday’s news cycle.

We commonly acknowledge – when we do take the time to stop and think about it – that our news cycles have about the same attention span as a 4-year-old on Christmas morning. No matter what we have in our hands, there’s always something brighter and shinier waiting for us under the tree. We typically attribute this to the declining state of journalism. But we – the consumers of news – are the ones who continually ignore the stories that matter in favour of gossipy tidbits.

This is just the latest example of that. It is nothing more than human nature. But there is a troubling trend here that is being accelerated by the impact of social media. This is definitely something we should pay attention to.

The Confounding Nature of Complexity

Just last week, I talked about something psychologists call a locus of control. Essentially it is defined by the amount of control you feel you have over your life. In times of stress, unpredictability or upheaval, our own perceived span of control tends to narrow to the things we have confidence we can manage. Our ability to cope draws inward, essentially circling the wagons around the last vestiges of our capability to direct our own circumstances. 

I believe the same is true of our ability to focus attention. The more complex the world gets, the more we tend to focus on things we can easily wrap our minds around. It has been shown repeatedly that anxiety impairs our focus: a study from Finland’s Åbo Akademi University showed that anxiety reduces the brain’s ability to concentrate on tasks. It eats away at our working memory, leaving us with a reduced capacity to integrate concepts and work things out. Complex, unpredictable situations naturally raise our level of anxiety, leading us to retreat to things we don’t have to work too hard to understand.

The irony here is the more we are aware of complex and threatening news stories, the more we go right past them to things like the Smith-Rock story. It’s like catnip to a brain that’s trying to retreat from the real news because we can’t cope with it.

This isn’t necessarily the fault of journalism, it’s more a limitation of our own brains. On Monday morning, CNN offered plenty of coverage dealing with the new airstrikes in Ukraine, Biden’s inflammatory remarks about Putin, Trump’s attempts to block Congress from counting votes and the restriction of LGBTQ awareness in the classrooms of Florida. But none of those stories were trending. What was trending were three stories about Rock and Smith, one about the Oscar winners and another about a 1600-pound shark. That’s what we were collectively reading.

False Familiarity

It’s not just the complexity of the real news that made the Rock/Smith story so compelling. Our built-in social instincts also made it irresistible.

Evolution has equipped us with highly attuned social antennae. Humans are herd animals, and when you travel in a herd, your ability to survive depends heavily on picking up signals from your fellow herd members. We have highly evolved instincts to help us determine who we can trust and who we should protect ourselves from. We are quick to judge others, and even quicker to gossip about behavior that steps over those invisible boundaries we call social norms.

For generations, these instincts were essential when we had to keep tabs on the people closest to us. But with the rise of celebrity culture in the last century, we now apply those same instincts to people we only think we know. We pass judgement on the faces we see on TV and in social media. We have a voracious appetite for gossip about the super-rich and the super-famous.

Those foibles may be ours and ours alone, but they’re not helped by the fact that a certain celebrity – namely one Mr. Smith – feels compelled to share way too much about himself with the public at large. Witness his long and tear-laden acceptance speech. Even though I have only a passing interest in the comings and goings of Will and Jada, I know more about their sex lives than about those of my closest friends. The social norm that restricts bedroom talk amongst our friends and family is not there with the celebrities we follow. We salivate over salacious details.

No Foul, No Harm?

That’s the one-two punch (sorry, I had to go there) that made the little Oscar ruckus such a hot news item. But what’s the harm? It’s just a momentary distraction from the never-ending shit-storm that defines our daily existence, right?

Not quite.

The more we continually take the path of least resistance in our pursuit of information, the harder it becomes for us to process the complex concepts that make up our reality. When that happens, we tend to attribute too much importance and meaning to these easily digestible nuggets of gossip. As we try to understand complex situations (which covers pretty much everything of importance in our world today), we start relying too much on cognitive shortcuts like availability bias and representativeness bias. In the first case, we apply whatever information we have at hand to every situation; in the second, we substitute stereotypes and easy labels for any real attempt to understand an individual or group.

Ironically, it’s exactly this tendency towards cognitive laziness that was skewered in one of Sunday night’s nominated features, Adam McKay’s Don’t Look Up.

Of course, it was ignored. As Will Smith said, sometimes, “art imitates life.”

Pursuing a Plastic Perfection

“Within every dystopia, there’s a little utopia”

— novelist Margaret Atwood

We’re a little obsessed with perfection. For myself, this has taken the form of a lifelong crush on Mary Poppins (Julie Andrews from the 1964 movie), who is “practically perfect in every way.”

We’ve been seeking perfection for some time now. The idea of creating Utopia, a place where everything is perfect, has been with us since the Garden of Eden. As humans have trodden down our timeline, we have been desperately seeking mythical Utopias, then religious ones, which then led to ideological ones.

Sometime around the beginning of the last century, we started turning to technology and science for perfection. Then, in the middle of the 20th century, we abruptly swung the other way, veering towards Dystopia while fearing that technology would take us to the dark side, à la George Orwell’s “1984” and Aldous Huxley’s “Brave New World.”

Lately, other than futurist Ray Kurzweil and the starry-eyed engineers of Silicon Valley, I think it’s fair to say that most of us have accepted that technology is probably a mixed bag at best: some good and some bad. Hopefully, when the intended consequences are tallied with the unintended ones, we net out a little to the positive. But we can all agree that it’s a long way from perfection.

This quest for perfection is taking some bizarre twists. Ultimately, it comes down to what we feel we can control, focusing our lives on the thinnest of experiences: that handful of seconds that someone pays attention to our social media posts.

It’s a common psychological reaction: the more we feel that our fate is beyond our control, the more we obsess about those things we feel we can control. And on social media, if we can’t control our world, our country, our town or even our own lives, perhaps our locus of control becomes narrowed to the point where the only thing left is our own appearance.

This effect is exacerbated by our cultural obsession with physical attractiveness. Beauty may only be skin deep, but in our world, it seems to count for everything that matters. Especially on Snapchat and Instagram.

And where there’s a need, there is a technological way. Snapchat filters and retouching apps that offer digitally altered perfection have proliferated. One of the latter is Facetune 2, a retouching app that takes your selfie and adjusts lighting, removes blemishes, whitens teeth and nudges you closer and closer to perfection.

In one blog post, the Facetune team, inspired by Paris Hilton, encourages you to start “sliving.” Not sure what the hell “sliving” is? Apparently, it’s a combination of “slaying it” and “living your best life.” It’s an updated version of “that’s hot” for a new audience.

Of course, it doesn’t hurt if you happen to look like Ms. Hilton or Kim Kardashian. The post assures us that it’s not all about appearance. Apparently, “owning it” and “being kind to yourself” are also among the steps to better “sliving.” But as you read down the post, it does ultimately come back to how you look, reinforced with this pearl of wisdom: “a true sliv’ is also going to look their absolute best when it counts”

And if that sounds about as deep as Saran Wrap, what do you expect when you turn to Paris Hilton for your philosophy of life? Plato she’s not.

Other social filter apps go even farther, essentially altering your picture until it’s no longer recognizable. Bulges are gone, to be replaced by chiseled torsos and optimally rounded butts. Cheeks are digitally sucked in and noses are planed to perfection. Eyes sparkle and teeth gleam. The end product? Sure, it looks amazing. It’s just not you anymore.

With all this pressure to have a perfect appearance, it’s little wonder that it’s royally messing with our heads (what’s inside the head, not the outside). Hence the new disorder of Snapchat dysmorphia. I wish it were harder to believe in this syndrome — but it’s what happens when people, many of them young girls, book a consultation with a plastic surgeon wanting to look exactly like their filtered Snapchat selfies.

According to one academic article, one in 50 Americans suffers from body dysmorphic disorder, where sufferers

“are preoccupied with at least one nonexistent or slight defect in physical appearance. This can lead them to think about the defect for at least one hour a day, therefore impacting their social, occupational, and other levels of functioning. The individual also should have repetitive and compulsive behaviors due to concerns arising from their appearances. This includes mirror checking and reassurance seeking among others.”

There’s nothing wrong with wanting perfection. As the old saying goes, it might be the enemy of good, but it can be a catalyst for better. We just have to go on knowing that perfection is never going to be attainable.

But social media is selling us a bogus bill of goods: The idea that perfect is possible and that everyone but us has figured it out.  

The Canary in the Casino

It may not seem like it if you’ve watched the news lately, but there are signs we’re balanced on the edge of a gigantic party. We’re all ready to treat ourselves to a little hedonistic indulgence.

As I mentioned in a previous column (rerun last week), physician, epidemiologist and sociologist Nicholas Christakis predicted this behavior, but not for a few years yet. He foresaw a sort of global “letting loose” starting sometime in 2024:

“What typically happens is people get less religious. They will relentlessly seek out social interactions in nightclubs and restaurants and sporting events and political rallies. There’ll be some sexual licentiousness. People will start spending their money after having saved it. They’ll be joie de vivre and a kind of risk-taking, a kind of efflorescence of the arts, I think.”

So, there is light at the end of the pandemic tunnel — but, according to a report just out from the American Gaming Association, some of us can’t wait a couple of years. First out of the gate were gamblers. Well before we started emerging from the pandemic, they were already rolling the dice and starting the party.

According to the report from the AGA, U.S. commercial gaming revenue hit a record $53 billion in 2021. That was more than 21% higher than the previous record, set in 2019, and a huge rebound of 77% from 2020 numbers, when COVID forced casinos to shut down for months at a time.
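As a quick back-of-envelope check, those percentages imply rough prior-year totals like these (a rounded, illustrative sketch, not the AGA’s own math):

```python
# Back-of-envelope check of the percentages quoted above (rounded, illustrative only).
revenue_2021 = 53.0  # $ billions, the 2021 record

implied_2019 = revenue_2021 / 1.21  # "more than 21% higher than the previous record, set in 2019"
implied_2020 = revenue_2021 / 1.77  # "a huge rebound of 77% from 2020 numbers"

print(f"Implied 2019 record: about ${implied_2019:.0f} billion")  # ~ $44 billion
print(f"Implied 2020 total:  about ${implied_2020:.0f} billion")  # ~ $30 billion
```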

You might think online gaming accounts for the jump, but you’d be wrong. In-casino gambling underpins this huge spike, accounting for $45.6 billion of the $53 billion total. People were saying to hell with health mandates and streaming into casinos across the country, with most of the top markets seeing significant gains from pre-COVID 2019.

While some of us might not be ready to ditch the masks and belly up to the bar, I suspect these gamblers are an early indicator of things to come. Call them a canary in a coal mine, if you will.

Because I can’t resist interesting historical tidbits, I thought I’d share how canaries ended up in coal mines in the first place. Early in the last century, canaries were used as an early warning system for poison gas in England. John Scott Haldane, who was researching the effects of carbon monoxide on humans, suggested using canaries as a “sentinel species,” an animal more sensitive to the impact of poisonous gases. They were kept in cages throughout the mines — and if a canary got sick or died, the miners were warned to evacuate.

But why canaries?

Canaries, like most birds, need tremendous amounts of oxygen to fly and to avoid altitude sickness. They actually take in oxygen twice on each breath, once while inhaling and again when exhaling. This, combined with their relatively small size, makes them hypersensitive to the impact of a poisonous gas. Also, canaries were easy to come by in England and convenient to transport. So they were recruited to help keep humans alive.

This makes them analogous to gamblers in the following way: Gamblers, by their nature, are built to be more willing to take some risk in search of a reward. You could say they are hyper-sensitive to the rush that comes from rewarding themselves. As such, they are the early adopters in the onrushing desire to put bad news behind them and let loose with a little hedonistic hell-raising. They are not atypical; they’re just ahead of the curve in this one respect.

Sooner or later, the rest of us will follow. Look for similar huge rebounds in the travel and hospitality sectors, entertainment, events and other industries focused on providing pleasure. The world will become one giant spring break party.

Which is perhaps only fitting, coming after a two-year-long winter of our discontent.

The Joe Rogan Experiment in Ethical Consumerism

We are watching an experiment in ethical consumerism take place in real time. I’m speaking of the Joe Rogan/Neil Young controversy that’s happening on Spotify. I’m sure you’ve heard of it, but if not, Canadian musical legend Neil Young had finally had enough of Joe Rogan’s spreading of COVID misinformation on his podcast, “The Joe Rogan Experience.” He gave Spotify an ultimatum: “You can have Rogan or Young. Not both.”

Spotify chose Rogan. Young pulled his library. Since then, a handful of other artists have followed Young, including former bandmates David Crosby, Stephen Stills and Graham Nash, along with fellow Canuck Hall of Famer Joni Mitchell.

But it has hardly been a stampede. One of the reasons is that — if you’re an artist — leaving Spotify is easier said than done. In an interview with Rolling Stone, Rosanne Cash said most artists don’t have the luxury of jilting Spotify: 

“It’s not viable for most artists. The public doesn’t understand the complexities. I’m not the sole rights holder to my work… It’s not only that a lot of people who aren’t rights holders can’t remove their work. A lot of people don’t want to. These are the digital platforms where they make a living, as paltry as it is. That’s the game. These platforms own, what, 40 percent of the market share?”

Cash also brings up a fundamental issue with capitalism: it follows profit, and it’s consumers who determine what’s profitable. Consumers make decisions based on self-interest: what’s in it for them. Corporations use that predictable behavior to make the biggest profit possible. That behavior has been perfectly predictable for hundreds of years. It’s the driving force behind Adam Smith’s Invisible Hand. It was also succinctly laid out by economist Milton Friedman in 1970:

“There is one and only one social responsibility of business–to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game, which is to say, engages in open and free competition without deception or fraud.”

We all want corporations to be warm and fuzzy — but it’s like wishing a shark were a teddy bear. It just ain’t gonna happen.

One who indulged in this wishful thinking was a little less well-known Canadian artist who also pulled his music from Spotify, Ontario singer/songwriter Danny Michel. He told the CBC:

“But for me, what it was was seeing how Spotify chose to react to Neil Young’s request, which was, you know: You can have my music or Joe. And it seems like they just, you know, got out a calculator, did some math, and chose to let Neil Young go. And they said, clear and loud: We don’t need you. We don’t need your music.”

Well, yes, Danny, I’m pretty sure that’s exactly what Spotify did. It made a decision based on profit. For one thing, Joe Rogan is exclusive to Spotify. Neil Young isn’t. And Rogan produces a podcast, which can have sponsors. Neil Young’s catalog of songs can’t be brought to you by anyone.

That makes Rogan a much better bet for revenue generation. That’s why Spotify paid Rogan $100 million. Music journalist Ted Gioia made the business case for the Rogan deal pretty clear in a tweet:

“A musician would need to generate 23 billion streams on Spotify to earn what they’re paying Joe Rogan for his podcast rights (assuming a typical $.00437 payout per stream). In other words, Spotify values Rogan more than any musician in the history of the world.”
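Gioia’s arithmetic checks out, using nothing more than the per-stream payout he assumes and the reported $100 million deal (a quick, illustrative calculation):

```python
# A quick check of Gioia's math, using only the figures quoted above.
payout_per_stream = 0.00437    # dollars per stream, his assumed typical payout
rogan_deal = 100_000_000       # dollars, the reported Spotify deal

streams_needed = rogan_deal / payout_per_stream
print(f"{streams_needed / 1e9:.1f} billion streams")  # ~22.9 billion, i.e. roughly 23 billion
```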

I hate to admit that Milton Friedman is right, but he is. I’ve said it time and time again: to expect corporations to put ethics ahead of profits is to ignore the DNA of the corporation. Spotify is doing what corporations will always do: strive to be profitable. The decision between Rogan and Young was made with a calculator. For Danny Michel to expect anything else from Spotify is simply naïve. If we’re going to play this ethical capitalism game, we must realize what the rules of engagement are.

But what about us? Are we any better than the corporations we keep putting our faith in?

We have talked about how we consumers want to trust the brands we deal with, but when a corporation drops the ethics ball, do we really care? We have been gnashing our teeth about Facebook’s many, many indiscretions for years now, but how many of us have quit Facebook? I know I haven’t.

I’ve seen some social media buzz about migrating from Spotify to another service. I personally have started down this road. Part of it is because I agree with Young’s stand. But I’ll be brutally honest here. The bigger reason is that I’m old and I want to be able to continue to listen to the Young, Mitchell and CSNY catalogs. As one of my contemporaries said in a recent post, “Neil Young and Joni Mitchell? Wish it were artists who are _younger_ than me.”

A lot of pressure is put on companies to be ethical, with no real monetary reasons why they should be. If we want ethics from our corporations, we have to make it important enough to us to impact our own buying decisions. And we aren’t doing that — not in any meaningful way.

I’ve used this example before, but it bears repeating. We all know how truly awful and unethical caged egg production is. The birds are kept in what are known as battery cages, each holding five to 10 birds, with each bird confined to a space of about 67 square inches. To help you visualize that, it’s roughly three-quarters the area of a standard sheet of letter paper. This is the hell we inflict on other animals solely for our own gain. No one can be for this. Yet 97% of us buy these eggs, just because they’re cheaper.

If we’re looking for ethics, we have to look in other places than brands. And — much as I wish it were different — we have to look beyond consumers as well. We have proven time and again that our convenience and our own self-interest will always come ahead of ethics. We might wish that were different, but our spending patterns say otherwise.

Don’t Be Too Quick To Dismiss The Metaverse

According to my fellow Media Insider Maarten Albarda, the metaverse is just another in a long line of bright shiny objects that — while promising to change the world of marketing — will probably end up on the giant waste heap of overhyped technologies.

And if we restrict Maarten’s caution to specifically the metaverse and its impact on marketing, perhaps he’s right. But I think this might be a case of not seeing the forest for the trees.

Maarten lists a number of other things that were supposed to revolutionize our lives: Clubhouse, AI, virtual reality, Second Life. All seemed to amount to much ado about nothing.

But as I said almost 10 years ago, when I first started talking about one of those overhyped examples, Google Glass — and what would eventually become the “metaverse” (rereading it now, perhaps I’m better at predictions than I thought) — the overall direction of these technologies does mark a fundamental shift:

“Along the way, we build a “meta” profile of ourselves, which acts as both a filter and a key to the accumulated potential of the ‘cloud.’ It retrieves relevant information based on our current context and a deep understanding of our needs, it unlocks required functionality, and it archives our extended network of connections.”

As Wired founder and former executive editor Kevin Kelly has told us, technology knows what it wants. Eventually, it gets it. Sooner or later, all these things are bumping up against a threshold that will mark a fundamental shift in how we live.

You may call this the long awaited “singularity” or not. Regardless, it does represent a shift from technology being a tool we use consciously to enhance our experiences, to technology being so seamlessly entwined with our reality that it alters our experiences without us even being aware of it. We’re well down this path now, but the next decade will move us substantially further, beyond the point of no return.

And that will impact everything, including marketing.

What is interesting is the layer technology is building over the real world, hence the term “meta.” It’s a layer of data and artificial intelligence that will fundamentally alter our interactions with that world. It’s technology that we may not use intentionally — or, beyond the thin layer of whatever interface we use, may not even be aware of.

This is what makes it so different from what has come before. I can think of no technical advance in the past that is so consequential to us personally yet functions beyond the range of our conscious awareness or deliberate usage. The eventual game-changer might not be the metaverse. But a change is coming, and the metaverse is a signal of that.

Technology advancing is like the tide coming in. If you watch the individual waves coming in, they don’t seem to amount to much. One stretches a little higher than the last, followed by another that fizzles out at the shoreline. But cumulatively, they change the landscape — forever. This tide is shifting humankind’s relationship with technology. And there will be no going back.

Maybe Maarten is right. Maybe the metaverse will turn out to be a big nothingburger. But perhaps, just perhaps, the metaverse might be the Antonio Meucci of our time: an example where the technology was inevitable, but the timing wasn’t quite right.

Meucci was an Italian immigrant who started working on the design of a workable telephone in 1849, a full two decades before Alexander Graham Bell even started experimenting with the concept. Meucci filed a patent caveat in 1871, five years before Bell’s patent application was filed, but was destitute and didn’t have the money to renew it. His wave of technological disruption may have hit the shore a little too early, but that didn’t diminish the significance of the telephone, which today is generally considered one of the most important inventions of all time in terms of its impact on humanity.

Whatever is coming, and whether or not the metaverse represents the sea-change catalyst that alters everything, I fully expect that at some point in the very near future we will pinpoint this time as the dawn of a technological shift that makes the introduction of the telephone seem trivial in comparison.