Finding the Space to Overthink

I’ve always been intrigued by Michael Keaton’s choices. I think he’s the most underrated actor of his generation. He, like many others of his era, went through the ’80s and ’90s entertainment hit mill. He cranked out stuff like Mr. Mom and Beetlejuice. He rebooted the Batman franchise with Tim Burton. He was everywhere – and it was getting to be too much. As he recounted in a 2017 interview with The Hollywood Reporter, he was “getting tired of hearing my own voice, feeling like I was kinda pulling out tricks, probably being lazy, probably being not particularly interested.”

So he retreated from the industry, back to his ranch in Montana. And, as typically happens when someone turns their back on Hollywood, Hollywood reciprocated in kind: “I had a life. And also not a whole lot of folks knocking on my door.”

He used the space he had, created both by his choices and the unanticipated consequences of those choices, to think about what he wanted to do – on his own terms: “I started getting really, really locked in and narrowing the focus and narrowing the energy and narrowing the vision and honing it and really thinking about what I wanted to do.”

For the last 10 years, that time taken to regroup has resulted in Keaton’s best work: Birdman, Spotlight, The Founder, The Trial of the Chicago 7, Dopesick and Worth. It’s a string that comes from a conscious decision to focus on work that means something. In a more recent interview with The Hollywood Reporter, he explains: “Without sounding really pretentious … I have a job that might actually change something, or at least make people think about something, or feel something.”

It’s the perspective of a mature mind that realizes that time, effort and energy are no longer in endless supply. They should be expended on something that matters. I also believe it’s a perspective that came from a lot of thinking in a place that allowed for expansive thoughts that wouldn’t get interrupted.

A place like Montana.

Keaton admits he probably overthinks things: “Probably because I’m too frightened, I’m incapable of phoning anything in.” Maybe a ranch in Montana is the place you need to be to overthink things and circle around a hunch endlessly until you’re finally able to nail it in place.

I don’t think this is a bad thing. I believe in quality over quantity. I also believe the world is tilting in the other direction. The soundtrack of our lives is a clock ticking loudly. We are constantly driven to produce. There isn’t a lot of time left over to just think. And if we keep pushing away our opportunities to just think – to absorb and ruminate and mull things over in our minds – we’ll lose the ability to do it at all.

I had my own taste of this in my career. For various reasons, all of them personal, I chose to keep the company I founded headquartered in Kelowna, a small city in the interior of British Columbia. In doing so, I’m sure I restricted our growth. Most of our clients were at least three hours and one transfer away by plane. I would attend conferences in New York, Chicago or San Francisco and come back to Kelowna feeling like I was trapped in a backwater, far from the mainstream of digital marketing. When I was home, I couldn’t grab a coffee with anyone outside our company who had a professional experience similar to mine.

But I also believe this gave me the time to “overthink” things. And good things came from that. We conducted a ton of research that attempted to uncover why people did what they did online, especially on search engines. We discovered that technology changes quickly, but people don’t. For us, user behavior became our lodestone in the strategies we created for our customers, constantly pointing us in the right direction. This was especially helpful when we started picking apart the tangled knot that is B2B buying.

I have always been proud of the work we were able to do. I believe it did matter. And I’m not sure all of that would have happened if we didn’t have the space to think – even overthink. I believe more people have to find this space.

They have to find their own Kelowna. Or Montana.

Older? Sure. Wiser? Debatable.

I’ve always appreciated Mediapost Editor-in-Chief Joe Mandese’s take on things. It’s usually snarky, cynical and sarcastic, all of which are firmly in my wheelhouse. He also says things I may think but wouldn’t say for the sake of political politeness.

So when Joe gets a full head of steam up, as he did in a recent post entitled “Peak Idiocracy?”, I set aside some time to read it. I can vicariously fling aside my Canadian reticence and enjoy a generous helping of Mandesian snarkiness. In this case, the post was a recap of Mediapost’s 2023 Marketing Politics Conference – and the depths that political advertising is sinking to in order to appeal to younger demographics. Without stealing Joe’s thunder (please read the post if you haven’t), one example involved TikTok and mouth mash-up filters. After the panel where this case study surfaced, Joe posed a question to the panelists.

“If this is how we are electing our representative leaders, do you feel like we’ve reached peak idiocracy in the sense that we are using mouth filters and Harry Potter memes to get their messages across?”

As Joe said, it was an “old guy question.” More than that, it was a cynical, smart, sarcastic old guy question. But the fact remains, it was an old guy question. One of the panelists, DGA Digital Director Laura Carlson, responded:

“I don’t think we should discount young voters’ intelligence. I think being able to have fun with the news and have fun with politics and enjoy TikTok and enjoy the platform while also engaging with issues you care about is something I wouldn’t look down on. And I think more of it is better.”

There’s something to this. Maybe a lot to this.

First, I think we have a fundamentally different idea of “messaging” from generation to generation. Our generation (technically I’m a Boomer, but the label Generation Jones is a better fit) grew up with the idea that information, whether it came from TV, newspapers, magazines or radio, was delivered as a complete package. There was a scarcity of information, and this bundling of curated information was our only choice for being informed.

That’s not the case for a generation raised with the Internet and social media. Becoming aware and being informed are often decoupled. In an environment jammed with information of all types – good and bad – information foraging strategies have had to evolve. Now, you have to somehow pierce the information filters we have all put in place in order to spark awareness. If you are successful in doing that and can generate some curiosity, there are umpteen million sources just a few keystrokes away where you can become informed.

Still, we “old guys” (and “old gals” – for the sake of consistency, I’ll use the masculine label, but I mean it in the gender-neutral way) do have a valid perspective that shouldn’t be dismissed as us just being old and grumpy. We’ve been around long enough to see how actions and consequences are correlated. We’ve seen how seemingly trivial trends can have lasting impacts, both good and bad. There is experience here that can prove instructive.

But we also must appreciate that those a few generations behind us have built their own cognitive strategies to deal with information that are probably a better match for the media environment we live in today.

So let me pose a different question. If only one generation could vote, and if everyone’s future depended on that vote, which generation would you choose to give the ballots to? Pew Research did a generational breakdown on awareness of social issues and for me, the answer is clear. I would far rather put my future in the hands of Gen Z and Millennials than in the hands of my own generation. They are more socially aware, more compassionate, more committed to solving our many existential problems and more willing to hold our governments accountable.

So, yes, political advertising might be dumbed down to TikTok level for these younger voters, but they understand how the social media game is played. I think they are savvy enough to know that a TikTok mash-up is not something to build a political ideology on. They accept it for what it is: a brazen attempt to scream just a little louder than the competition for their attention, standing out from the cacophony of media intrusiveness that engulfs them. If it has to be silly to do that, so be it.

Sure, the generation of Joe Mandese and myself grew up with “real” journalism: the nightly news with Dan Rather and Tom Brokaw, 60 Minutes, The MacNeil/Lehrer Report, The New York Times and The Washington Post. We were weaned on political debates that dealt with real issues.

And for all that, our generation still put Trump in the White House. So much for the wisdom of “old guys.”

The Eternal Hatred of Interruptive Messages

Spamming and Phishing and Robocalls at Midnight
Pop ups and Autoplays and LinkedIn Requests from Salespeople

These are a few of my least favorite things

We all feel the excruciating pain of unsolicited demands on our attention. In a survey of 2,000 Brits by online security firm Kaspersky ranking the 50 most annoying things in life, deleting spam email came in at number 4, behind scrubbing the bath, being trapped in voicemail hell and cleaning the oven.

Based on this study, cleanliness is actually next to spamminess.

Granted, Kaspersky is a tech security firm, so the results are probably biased to the digital side, but for me the results check out. As I ran down the list, I hated all the same things.

In the same study, robocalls came in at number 10. Personally, they top my list, especially phishing robocalls. I hate – hate – hate rushing to my phone only to hear that the IRS is going to prosecute me unless I immediately push 7 on my touch-tone keypad.

One, I’m Canadian. Two, go to Hell.

I spend more and more of my life trying to avoid marketers and scammers (the line between the two is often fuzzy) trying desperately to get my attention by any means possible. And it’s only going to get worse. A study just out showed that the ChatGPT AI chatbot could be a game changer for phishing, making scam emails harder to detect. And with Google’s Gmail filters already trapping 100 million phishing emails a day, that is not good news.

The marketers in my audience are probably outrunning Usain Bolt in their dash to distance themselves from spammers, but interruptive demands on our attention all lie on a spectrum that shares the same baseline: any demand on our attention that we don’t ask for will annoy us. The only difference is the degree of annoyance.

Let’s look at the psychological mechanisms behind that annoyance.

There is a direct link between the parts of our brain that govern the focusing of attention and the parts that regulate our emotions. At its best, it’s called “flow” – a term coined by Mihaly Csikszentmihalyi that describes a sense of full engagement and purpose. At its worst, it’s a feeling of anger and anxiety when we’re unwillingly dragged away from the task at hand.

A 2017 neurological study by Rejer and Jankowski found that when a participant’s cognitive processing of a task was interrupted by online ads, activity in the frontal and prefrontal cortex simply shut down while other parts of the brain significantly shifted activity, indicating a loss of focus and a downward slide in emotions.

Another study, by Edwards, Li and Lee, points the finger at something called Reactance Theory as a possible explanation. Very simply put, when something interrupts us, we perceive a loss of freedom to act as we wish and a loss of control of our environment. Again, we respond by getting angry.

It’s important to note that this negative emotional burden applies to any interruption that derails what we intend to do. It is not specific to advertising, but a lot of advertising falls into that category. It’s the nature of the interruption and our mental engagement with the task that determine the degree of negative emotion.

Take skimming through a news website, for instance. We are there to forage for information. We are not actively engaged in any specific task. And so being interrupted by an ad while in this frame of mind is minimally irritating.

But let’s imagine that a headline catches our attention, and we click to find out more. Suddenly, we’re interrupted by a pop-up or pre-roll video ad that hijacks our attention, forcing us to pause our intention and focus on irrelevant information. Our level of annoyance begins to rise quickly.

Robocalls fall into a different category of annoyance for a couple of reasons. First, we have a conditioned response to phone calls: we hope to be rewarded by hearing from someone we know and care about. That’s what makes it so difficult to ignore a ringing phone.

Second, phone calls are extremely interruptive. We must literally drop whatever we’re doing to pick up a phone. When we go to all this effort only to realize we’ve been duped by an unsolicited and irrelevant call, the “red mist” starts to float over us.

You’ll note that – up to this point – I haven’t even dealt with the nature of the message. This has all been focused on the delivery of the message, which immediately puts us in a more negative mood. It doesn’t matter whether the message is about a service special for our vehicle, an opportunity to buy term life insurance or an attempt by a fictitious Nigerian prince to lighten the load of our bank account by several thousand dollars; whatever the message, we start in an irritated state simply due to the nature of the interruption.

Of course, the more nefarious the message that’s delivered, the more negative our emotional response will be. And this has a doubling down effect on any form of intrusive advertising. We learn to associate the delivery mechanism with attempts to defraud us. Any politician that depends on robocalls to raise awareness on the day before an election should ponder their ad-delivery mechanism.

Good News and Bad News about Black Swans

First, the good news. According to a new study, we may be able to predict extreme catastrophic events such as earthquakes, tsunamis, massive wildfires and pandemics through machine learning and neural networks.

The problem with these “black swan” events (events that are very rare but have extreme consequences) is that there isn’t much existing data we can use to predict them. The technical term for these is “stochastic” events – they are random and are, by definition, very difficult to forecast.

Until now. According to the study’s lead author, George Karniadakis, the researchers may have found a way to give us a heads-up by using machine learning to make the most out of the meagre data we do have. “The thrust is not to take every possible data and put it into the system, but to proactively look for events that will signify the rare events,” Karniadakis says. “We may not have many examples of the real event, but we may have those precursors. Through mathematics, we identify them, which together with real events will help us to train this data-hungry operator.”
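To make that sampling idea a little more concrete, here is a minimal sketch, under my own assumptions, of what “proactively looking for precursors” might look like: instead of training on the historical record uniformly, the rare events and their near-miss precursors are weighted heavily. The synthetic data, features and weights below are all hypothetical illustrations – the actual study used far more sophisticated neural operators, not this toy classifier.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)

# Synthetic history: 10,000 observations, of which ~0.5% are extreme events.
X = rng.normal(size=(10_000, 5))   # observed features (hypothetical sensors)
risk = X[:, 0] * X[:, 1]           # hidden driver of the extremes
y = (risk > np.quantile(risk, 0.995)).astype(int)

# "Precursors": near-threshold observations that didn't quite become events.
precursor = (risk > np.quantile(risk, 0.95)) & (y == 0)

# Weight the rare events and their precursors heavily, rather than feeding
# every possible data point into the system uniformly.
weights = np.ones(len(y))
weights[y == 1] = 100.0
weights[precursor] = 10.0

model = GradientBoostingClassifier().fit(X, y, sample_weight=weights)
print(model.predict_proba(X[:5])[:, 1].round(3))  # predicted event probability
```

The only point of the sketch is the sampling strategy: the scarce signal gets amplified rather than drowned out by the mass of ordinary days.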

This means that this science could potentially save thousands – or millions – of lives.

But – and now comes the bad news – we have to listen to it. And we have a horrible track record of doing that. Let’s take just one black swan – COVID-19. Remember that?

Justsecurity.org is an “online forum for the rigorous analysis of security, democracy, foreign policy, and rights.” In other words, it’s their job to minimize the impact of black swans. And they put together a timeline of the US response to the COVID-19 pandemic. Now that we know the consequences, it’s a terrifying and maddening read. Without getting into the details, it was months before the US federal government took substantive action against the pandemic, despite repeated alerts from healthcare officials and scientists. This put the US behind pretty much the entire developed world in terms of minimizing the impact of the pandemic and saving lives. All the bells, whistles and sirens were screaming at full volume, but no one wanted to listen.

Why? Because there has been a systemic breakdown in what we call epistemic trust – trust in new information coming to us from what should be a trustworthy and relevant source.

I’ll look at this breakdown on two fronts – trust in government and trust in science. These two things should work together, but all too often they don’t. That was especially true in the Trump administration’s handling of the COVID-19 pandemic.

Let’s start with trust in government. Based on a recent OECD study across 22 countries, on average only about half of citizens trust their government. Trust is highest in countries like Finland, Norway and Luxembourg (where only 20 to 30% of citizens distrust their government) and lowest in countries such as Colombia, Latvia and Austria (where over 60% of citizens have no trust in their government).

You might notice I didn’t mention the U.S. That’s because it wasn’t included in the study. But the Pew Research Center has been tracking trust in government since 1958, so let’s look at that.

The erosion of trust in the US federal government started with Lyndon Johnson, with trust plummeting through Nixon and Watergate. Interestingly, although separated by ideology, Republicans and Democrats track similarly on the erosion of trust from Nixon through George W. Bush, with the exception being Ronald Reagan. That similarity began to break down with Obama and polarized even more with Trump and Biden, with the partisan trend lines moving in opposite directions. But the overall trend has still been toward lower trust.

Now, let’s look at trust in science. While not as drastic as the decline of trust in government, Pew found that trust in science has also declined, especially in the last few years. The percentage of Americans with no trust in science almost doubled, from 12% in April 2020 to 22% in December 2021.

It’s not that the science got worse in those 20 months. It’s that we didn’t want to hear what the science was telling us. The thing about epistemic trust – our willingness to trust trustworthy information – is that it varies depending on what mood we’re in. The higher our stress level, the less likely we are to accept good information at face value, especially if what it’s trying to tell us will only increase our level of stress.

Inputting new information that disrupts our system of beliefs is hard work under any circumstances. It taxes the brain. And if our brain is already overtaxed, it protects itself by locking the doors and windows that new information may sneak through and doubling down on our existing beliefs. This is what psychologists call Confirmation Bias. We only accept new information if it matches what we already believe. This is doubly true if the new information is not something we really want to hear.

The only thing that may cause us to question our beliefs is a niggling doubt, caused by information that doesn’t fit with them. But we will go out of our way to find information that does conform to our beliefs so we can ignore the information that doesn’t, no matter how trustworthy its source. The explosion of misinformation on the internet and through social media has made it easier than ever to stick with our beliefs and willfully ignore information that threatens them.

The other issue in the systemic breakdown of trust may not always be the message – it might be the messenger. If science is trying to warn us about a threatening black swan, that warning is generally going to be delivered in one of two ways: either through a government official or through the media. And that’s probably where we have our biggest problem. Again, referring to research done by Pew, Americans distrusted journalists almost as much as government. Sixty percent of American adults had little to no trust in journalists, and a whopping 76% had little to no trust in elected officials.

To go back to my opening line: the good news is that science can warn us about black swan events and save lives. The bad news is, we have to pay attention to those warnings.

Otherwise, it’s just a boy calling “wolf.”

Harry, Meghan and the Curse of Celebrity

The new Netflix series on Harry and Meghan is not exactly playing out according to plan. A few weeks ago, MediaPost TV columnist Adam Buckman talked about the series, which promised an unprecedented, intimate view into the lives of the wayward Royal and his partner, its aim being “to give the rest of us a full-access pass into every nook and cranny of the lives and minds of Harry and Meghan.”

Since then, reviews have been mixed. While it is (according to Netflix) their most watched documentary ever, the world seems to be responding with a collective yawn. It is certainly not turning out to be the PR boost the two were hoping for, at least based on some viewer reviews on Rotten Tomatoes. Here is just one sample: “A massive whinge fest based on a string of lies, half-truths, and distortions of reality from two of the most privileged people on the planet.”

What I found interesting in this is the complex concept of celebrity, and how it continues to evolve – or more accurately, devolve – in our culture. This is particularly true when we mix our attitudes of modern celebrity with the hoary construct of royalty.

If it does anything, I think Harry and Meghan shows how the very concept of celebrity has turned toxic and has poisoned whatever nominal value you may find in sustaining a monarchy. And, if we are going to dissect the creeping disease of celebrity, we must go to the root of the problem, the media, because our current concept of celebrity didn’t really exist before modern mass media.

We have evolved to keep an eye on those at the top of the societal pyramid. It was a good survival tactic to do so. Our apex figureheads – whether they be heroes or gods – served as role models; a literal case of monkey see, monkey do. But watching them also ensured political survival. There is a bucketload of psychology tucked up in our brains reinforcing this human trait.

In many mythologies, the line between heroes and gods was pretty fuzzy. Also, interestingly, gods were always carnal creatures. The Greek and Roman mythical gods and heroes ostensibly acted as both role models and moral cautionary tales. With great power came great hedonistic appetites.

This gradually evolved into royalty. With kings and queens, there was a very deliberate physical and societal distance kept between royalty and the average subject. The messy bits of bad behavior that inevitably come with extreme privilege were always kept well hidden. It pretty much played out that way for thousands of years.

There was a yin and yang duality to this type of celebrity that evolved over time. If we trace the roots of the word notorious, we see the beginnings of this duality and get some hints of when it began to unravel.

Notorious comes from the Latin notus – “known.” Its current meaning, to be known for something negative, only emerged in the 17th century. It seems we could accept the duality of notoriety when it came to the original celebrities – our heroes and gods – but with the rise of Christianity and, later, Puritanism (which also hit its peak in the 17th century), we started a whitewash campaign on our own God’s image. This had a trickle-down effect in a more strait-laced society. We held our heroes and our God, as well as our kings and queens, to a higher standard. We didn’t want to think of them as carnal creatures.

Then, thanks to the media, things got a lot more complicated.

Up until the 19th century, there was really no such thing as a celebrity the way we know them today. Those that care about such things generally agree that French actress Sarah Bernhardt was the first modern celebrity. She became one because she knew how to manipulate media. She was the first to get her picture in the press. She was able to tour the world, with the telegraph spreading the word before her arrival. As the 19th century drew to a close, our modern concept of celebrity was being born.

It took a while for this fascination with celebrity to spill over to monarchies. In the case of the House of Windsor (a made-up name – the actual family name was Saxe-Coburg-Gotha, a decidedly Germanic name that became problematic when England was at war with Germany in World War I), the problem came to a head rather abruptly with King Edward VIII. He was the first royal who revelled in celebrity and who tried to use the media to his advantage. The worlds of celebrity and royalty collided with his abdication in 1936.

In watching Harry and Meghan, I couldn’t help but recall the many, many collisions between celebrity and the Crown since then. The monarchy has always tried to control its image through the media, and one can’t help feeling it has been hopelessly naïve in that attempt. Celebrity feeds on itself – it is the nature of the beast – and control is not an option.

Celebrity gives us the illusion of intimacy. We mistakenly believe we know the person who is famous, just as we know those closest to us in our own social circle. We feel we have the right to judge them based on the distorted image of them that comes to us through the media. Somehow, we believe we know what motivates Harry and Meghan, what their ethics entail, what type of people they are.

I suppose one can’t fault Harry and Meghan for trying – yet again – to add their own narrative to the whirling pool of celebrity that surrounds them. But, if history is any indicator, it’s not really a surprise that it’s not going according to their plan.

1,000,000 Words to the Wise

According to my blog, I’ve published 1,152 posts since I started it back in 2004. I was 43 when I started writing these posts.

My average post is about 870 words long, so based on my admittedly limited math skills, that means I’ve written just a smidge over 1 million words in the last 18 years. If I had been writing books instead, that would be 1.71 books the length of War and Peace, or ten average novels.
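For anyone who wants to check the math, the arithmetic is simple enough. The novel lengths are my own rough assumptions (War and Peace at roughly 587,000 words, an average novel at roughly 100,000):

```python
posts = 1152          # posts published since 2004
avg_words = 870       # average words per post
total = posts * avg_words

print(f"{total:,}")               # 1,002,240 -> a smidge over a million words
print(round(total / 587_000, 2))  # ~1.71 War and Peaces (~587k words each)
print(round(total / 100_000))     # ~10 average novels (~100k words each)
```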

For those of you who have been following my output for some or all of that time, first of all, I thank you. Secondly, you’ll have noticed a slow but steady editorial drift toward existential angst. I suspect that’s a side effect of aging.

For most of us, as we age, we grapple with the nature of the universe. We worry that the problems that lie in the future might be beyond our capabilities to deal with. We fret about the burning dumpster fire we’re leaving for the next generation.

Most of us tend to deal with this by narrowing our span of control. We zero in on achieving order in the things we feel lie within our capabilities. For the average aging guy, this typically manifests itself in obsessions with weed-free lawns, maniacally over-organized garages or driveways free of grease spots. I aspire to achieve at least one of these things before I die.

But along with this obsessive need for order somewhere in our narrowing universe, there’s also a recognition that time is no longer an unlimited commodity for us. Some of us feel we need to leave something meaningful behind. More than a few of us older dudes become obsessed with creating our magnum opus.

Take Albert Einstein, for example. In 1905, which would be known as his annus mirabilis (miracle year), Einstein produced four papers that redefined physics as we knew it. One of them was the paper on special relativity. Einstein was just 26 years old.

As stunning as his achievements were that year, they were not what he wanted to leave as his definitive legacy. He would live another 50 years, until 1955, and spent a good portion of the last half of his life chasing a Unified Field Theory that he hoped would somehow reconcile the explosion of contradiction that came with the emergence of quantum mechanics. He would never be successful in doing so.

In his 1940 essay, ‘A Mathematician’s Apology,’ G.H. Hardy asserted that mathematics was “a young man’s game” and that mathematical ability declined as one got older. By extension, conventional wisdom would have you believe that the same holds true for science — primarily the so-called ‘hard’ sciences like chemistry, biology and especially physics.

Philosophy – on the other hand – is typically a path that doesn’t reach its peak until much later in life. This is true for most of what are called the “soft” sciences, including political science, economics and sociology.

In an admittedly limited but interesting analysis, author and programmer Mark Jeffrey visualized the answer to the question: “At what age do we do our greatest work?” In fields like mathematics and physics, notable contributors hit their peak in their mid-30s. But in philosophy, literature, art and even architecture, the peak came a decade or two later. As Jeffrey notes, his methodology probably skewed results to the younger side.

This really comes down to two different definitions of intelligence: pure cognitive processing power, and an ability to synthesize input from the world around us and – hopefully – add some wisdom to the mix. Some disciplines need a little seasoning – a little experience and perspective. This difference in the nature of our intelligence really drives the age-old debate between hard sciences and soft sciences, as a post from Utah State University explains:

“Hard sciences use math explicitly; they have more control over the variables and conclusions. They include physics, chemistry and astronomy. Soft sciences use the process of collecting empirical data then use the best methods possible to analyze the information. The results are more difficult to predict. They include economics, political science and sociology.”

In this explanation, you’ll notice a thread I’ve plucked at before, the latest being my last post about Elon Musk and his takeover of Twitter: hard sciences focus on complicated problems and soft sciences look at complex problems. People who are “geek smart” and good at complicated problems tend to peak earlier than those who are willing to tackle complex problems. You’re born with “smart” – but you have to accumulate “wisdom” over your life.

Now, I certainly don’t intend to put myself in the august company quoted above. My path has been infinitesimally consequential compared to, say, Albert Einstein’s. But still, I think I get where Einstein was going when he became obsessed with trying to (literally) bring some order to the universe.

For myself, I have spent much of the last decade or so trying to understand the thorny entanglement of technology and human behavior. I have watched digital technology seep into every aspect of our experience.

And I’m worried. I’m worried because I think this push of technology has been powered by a cabal of those who are “geek smart” but lack the wisdom or humility to ponder the unintended consequences of what they are unleashing. If I’ve gathered even a modicum of the type of intelligence required to warn about what may lie on the path ahead, I think I have to keep doing so, even if it takes another million words – give or take.

It Should Be No Surprise that Musk is Messing Up Twitter

I have to admit – I’m somewhat bemused by all the news rolling out of Elon Musk’s V2.0 edition of Twitter. Here is just a quick round-up of headlines grabbed from a Google News search last week:

Elon Musk took over a struggling business with Twitter and has quickly made it worse – CNBC

Elon Musk is Bad at This – The Atlantic

The Elon Musk (Twitter) Era Has Been a Complete Mess – Vanity Fair

Elon Musk “Straight-up Alone,” “Winging” Twitter Changes – Business Insider

To all these, I have to say, “What the Hell did you expect?”

Look, I get that Musk is on a different plane of smart from most of us. No argument there.

The same is true, I suspect, for most tech CEOs who are the original founders of their company. The issue is that the kind of smart they are is not necessarily the kind of smart you need to run a big complex corporation. If you look at the various types of intelligence, they would excel at logical-mathematical intelligence – or what I would call “geek-smart.” But this intelligence can often come at the expense of other kinds of intelligence that would be a better fit in the CEO’s role. Both interpersonal and intrapersonal intelligence immediately come to mind.

Musk is not alone. There is a bushel load of tech CEOs who have pulled off a number of WTF moves. In his Atlantic article “Silicon Valley’s Horrible Bosses,” Charlie Warzel gives us a few examples ripped straight from the handbook of the “Elon Musk School of Management.” Most of them involve making hugely impactful HR decisions with little concern for the emotional impact on employees, and then doubling down on the mistake by choosing to communicate through Twitter.

For most of us with even a modicum of emotional intelligence, this is unimaginable. But if you’re geek-smart, it probably seems logical. Twitter is a perfect communication medium for geek-smart people – it’s one-sided, as black and white as you can get and conveniently limited to 280 characters. There is no room for emotional nuance or context on Twitter.

The disconnect in intelligence types comes in looking at the type of problems a CEO faces. I was CEO of a very small company and even at that scale, with a couple dozen employees, I spent the majority of my time dealing with HR issues. I was constantly trying to navigate my way through these thorny and perplexing issues. I did learn one thing – issues that include people, whether they be employees or customers, generally fall into the category of what is called a “complex problem.”

In 1999, an IBM manager named Dave Snowden realized that not every problem you run into when managing a corporation requires the same approach. He put together a decision-making model to help managers identify the best decision strategy for the issue they’re dealing with. He called the model Cynefin, which is the Welsh word for habitat. In the model, there are five decision domains: Clear, Complicated, Complex, Chaotic and Confusion. Cynefin is really a sense-making tool to help guide managers through problems that are complicated or complex in the hope that chaos can be avoided.
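As a rough sketch of how the framework sorts problems – the domain names and decision sequences below are Snowden’s, but the Python structure is purely my own illustration:

```python
# The five Cynefin domains, paired with the decision sequence the framework
# recommends for each. ("Confusion" is the state of not yet knowing which
# domain you're actually in.)
from enum import Enum

class Domain(Enum):
    CLEAR = "sense -> categorize -> respond (apply best practice)"
    COMPLICATED = "sense -> analyze -> respond (bring in the experts)"
    COMPLEX = "probe -> sense -> respond (run safe-to-fail experiments)"
    CHAOTIC = "act -> sense -> respond (stabilize the situation first)"
    CONFUSION = "break the situation apart until the pieces fit a domain"

def approach(domain: Domain) -> str:
    """Return the recommended decision strategy for a given domain."""
    return domain.value

# A mass HR decision involves people, emotions and unknown unknowns:
print(approach(Domain.COMPLEX))
```

The point of the model is that the Complex domain calls for probing and sensing, not the expert analysis that works so well in the Complicated one.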

Geek-smart people are very good at complicated problems. This is the domain of the “expert,” who can rapidly sift through the “known unknowns.”

Give an expert a complicated problem and they’re the perfect fit for the job. They have the ability to home in on the relevant details and parse out the things that would distract the rest of us. Cryptography is an example of a complicated problem. So is most coding. This is the natural habitat of the tech engineer.

Tech founders initially become successful because they are very good at solving complicated problems. In fact, in our culture, they are treated like rock stars. They are celebrated for their “expertise.” Typically, this comes with a “smartest person in the room” level of smugness. They have no time for those who don’t see through the complications of the world the same way they do.

Here we run into a cognitive obstacle uncovered by political scientist Philip E. Tetlock in his 2005 book, Expert Political Judgment: How Good Is It? How Can We Know?

As Tetlock discovered, expertise in one domain doesn’t always mean success in another, especially if one domain has complicated problems and the other has complex problems.

Complex problems, like predicting the future or managing people in a massive organization, lie in the realm of “unknown unknowns.” Here, the answer is emergent. These problems are, by their very nature, unpredictable. The very toughest complex problems fall into a category I’ve talked about before: Wicked Problems. And, as Philip Tetlock discovered, experts are no better at dealing with complexity than the rest of us. In fact, in a complex scenario like predicting the future, you’d probably have just as much success with a dart-throwing chimpanzee.

But it gets worse. There’s no shame in not being good at complex problems. None of us are. The problem with expertise lies not in a lack of knowledge, but in experts sticking to a cognitive style ill-suited to the task at hand: trying to apply complicated brilliance to complex situations. I call this the “everything is a nail” syndrome. When all you have is a hammer, everything looks like a nail.

Tetlock explains: “They [experts] are just human in the end. They are dazzled by their own brilliance and hate to be wrong. Experts are led astray not by what they believe, but by how they think.”

A geek-smart person believes they know the answer better than anyone else because they see the world differently. They are not open to outside input. But it’s exactly that kind of open-minded thinking that’s required to wrestle with complex problems.

When you consider all that, is it any wonder that Musk is blowing up Twitter – and not in a good way?

My Many Problems with the Metaverse

I recently had dinner with a comedian who had just done his first gig in the Metaverse. It was in a new meta-comedy club. He was excited and showed me a recording of the gig.

I have to admit, my inner geek thought it was very cool: disembodied hands clapping with avataresque names floating above, bursts of virtual confetti for the biggest laughs, and even a virtual hook that instantly snagged meta-hecklers, banishing them to meta-purgatory until they promised to behave. The comedian said he wanted to record a comedy meta-album in the meta-club to release to his meta-followers.

It was all very meta.

As I mentioned, my inner geek is intrigued by the Metaverse. But as a human who ponders our future (probably more than is healthy), I have grave concerns on a number of fronts. I have mentioned most of these individually in previous posts, but I thought it might be useful to round them up:

Removed from Reality

My first issue is that the Metaverse just isn’t real. It’s a manufactured reality. This is at the heart of all the other issues to come.

We might think we’re clever, and that we can manufacture a better world than the one nature has given us, but my response to that would be Orgel’s Second Rule, courtesy of Sir Francis Crick, co-discoverer of the structure of DNA: “Evolution is cleverer than you are.”

For millions of years, we have evolved to be a good fit in our natural environment. There are thousands of generations of trial and error baked into our DNA that make us effective in our reality. Most of that natural adaptation lies hidden from us, ticking away below the surface of both our bodies and brains, silently correcting course to keep us aligned and functioning well in our world.

But we, in our never-ending human hubris, somehow believe we can engineer an environment better than reality in less than a single generation. If we take Second Life as the first iteration of the metaverse, we’re barely two decades into the engineering of a meta-reality.

If I were placing bets on who is the better environmental designer for us, humans or evolution, my money would be on evolution, every time.

Whose Law Is It Anyway?

One of the biggest selling features of the Metaverse is that it frees us from the restrictions of geography. Physical distance has no meaning when we go meta.

But this also has issues. Societies need laws and our laws have evolved to be grounded within the boundaries of geographical jurisdictions. What happens when those geographical jurisdictions become meaningless? Right now, there are no laws specifically regulating the Metaverse. And even if there are laws in the future, in what jurisdiction would they be enforced?

This is a troubling loophole – and by hole I mean a massive gaping metaverse-sized void. You know who is attracted by a lack of laws? Those who have no regard for the law. If you don’t think that criminals are currently eyeing the metaverse looking for opportunity, I have a beautiful virtual time-share condo in the heart of meta-Boca Raton that I’d love to sell you.

Data Is the Matter of the Metaverse

Another “selling feature” for the metaverse is the ability to append metadata to our own experiences, enriching them with access to information and opportunities that would be impossible in the real world. In the metaverse, the world is at our fingertips – or in our virtual headset – as the case may be. We can stroll through worlds, real or imagined, and the sum of all our accumulated knowledge is just one user-prompt away.

But here’s the thing about this admittedly intriguing notion: it makes data a commodity and commodities are built to be exchanged based on market value. In order to get something of value, you have to exchange something of value. And for the builders of the metaverse, that value lies in your personal data. The last shreds of personal privacy protection will be gone, forever!

A For-Profit Reality

This brings us to my biggest problem with the Metaverse – the motivation for building it. It is being built not by philanthropists or philosophers, academics or even bureaucrats. The metaverse is being built by corporations, who have to hit quarterly profit projections. They are building it to make a buck, or, more correctly, several billion bucks.

These are the same people who have made social media addictive by taking the dirtiest secrets of Las Vegas casinos and using them to enslave us through our smartphones. They have toppled legitimate governments for the sake of advertising revenue. They have destroyed our concept of truth, bashed apart the soft guardrails of society and are currently dismantling democracy. There is no noble purpose for a corporation – their only purpose is profit.

Do you really want to put your future reality in those hands?

The Ten-Day Tech Detox

I should have gone cold turkey on tech. I really should have.

It would have been the perfect time – should have been the perfect time.

But I didn’t. As I spent 10 days on BC’s gorgeous Sunshine Coast with family, I also trundled along my assortment of connected gadgets.

But I will say it was a partially successful detox. I didn’t crack open the laptop as much as I usually do. I generally restricted use of my iPad to reading a book.

But my phone – it was my phone, always within reach, that tempted me with social media’s siren call.

In a podcast, Andrew Selepak, social media professor at the University of Florida, suggests that rather than doing a total detox that is probably doomed to fail, you use vacations as an opportunity to use tech as a tool rather than an addiction.

I will say that for most of the time, that’s what I did. As long as I was occupied with something I was fine. 

Boredom is the enemy. It’s boredom that catches you. And the sad thing was, I really shouldn’t have been bored. I was in one of the most beautiful places on earth. I had the company of people I loved. I saw humpback whales – up close – for Heaven’s sake. If ever there was a time to live in the moment, to embrace the here and now, this was it. 

The problem, I realized, is that we’re not really comfortable any more with empty spaces – whether they be in conversation, in our social life or in our schedule of activities. We feel guilt and anxiety when we’re not doing anything.

It was an interesting cycle. As I decompressed after many weeks of being very busy, the first few days were fine. “I need this,” I kept telling myself. It’s okay just to sit and read a book. It’s okay not to have every half-hour slot of the day meticulously planned to jam as much in as possible.

That lasted about 48 hours. Then I started feeling like I should be doing something. I was uncomfortable with the empty spaces.

The fact is, as I learned, boredom has always been part of the human experience. It’s a feature – not a bug. Boredom represents the empty spaces that allow themselves to be filled with creativity. Alicia Walf, a neuroscientist and a senior lecturer in the Department of Cognitive Science at Rensselaer Polytechnic Institute, says it is critical for brain health to let yourself be bored from time to time.

“Being bored can help improve social connections. When we are not busy with other thoughts and activities, we focus inward as well as looking to reconnect with friends and family. 

“Being bored can help foster creativity. The eureka moment when solving a complex problem when one stops thinking about it is called insight.

“Additionally, being bored can improve overall brain health. During exciting times, the brain releases a chemical called dopamine which is associated with feeling good. When the brain has fallen into a predictable, monotonous pattern, many people feel bored, even depressed. This might be because we have lower levels of dopamine.”

That last bit, right there, is the clue to why our phones are particularly prone to being picked up in times of boredom. Actually, three things are at work here. The first is that our mobile devices let us carry an extended social network in our pockets. As an article from Harvard explains: “Thanks to the likes of Facebook, Snapchat, Instagram, and others, smartphones allow us to carry immense social environments in our pockets through every waking moment of our lives.”

As Walf said, boredom is our brain’s way of cueing us to seek social interaction. Traditionally, this meant getting the hell out of our cave – or cabin, or castle – and getting some face time with other humans.

But technology has short-circuited that. Now, we get that social connection through the far less healthy substitute of a social media platform. And – in the most ironic twist – we get that social jolt not by interacting with the people we happen to be with, but by each staring at a tiny little screen we hold in our hand.

The second problem is that mobile devices are not designed to leave us alone, basking in our healthy boredom. They are constantly beeping, buzzing and vibrating to get our attention. 

The third problem is that – unlike a laptop or even a tablet – mobile devices are our device of choice when we are jonesing for a dopamine jolt. It’s our phones we reach for when we’re killing time in a lineup, riding the bus or waiting for someone in a coffee shop. This is why I had a hard time relegating my phone to being just a tool while I was away.

As a brief aside – even the term “killing time” shows how we are scared to death of being bored. That’s a North American saying – boredom is something to be hunted down and eradicated. You know what Italians call it? “Il dolce far niente” – the sweetness of doing nothing. Many are the people who try to experience life by taking endless photos and posting on various feeds, rather than just living it. 

The fact is, we need boredom. Boredom is good, but we are declaring war on it, replacing it with a destructive need to continually bathe our brains in the dopamine high that comes from checking our Facebook feed or the latest TikTok reel.

At least one of the architects of this vicious cycle feels some remorse (also from the Harvard article): “I feel tremendous guilt,” admitted Chamath Palihapitiya, former Vice President of User Growth at Facebook, to an audience of Stanford students. He was responding to a question about his involvement in exploiting consumer behavior. “The short-term, dopamine-driven feedback loops that we have created are destroying how society works.”

That is why we have to put the phone down and watch the humpback whales. That, miei amici, is il dolce far niente!

Dealing with Daily Doom

“We are Doomed”

The tweet came yesterday from a celebrity I follow. And you know what? I didn’t even bother to find out in which particular way we were doomed. That’s probably because my social media feeds are filled with daily predictions of doom. The end being nigh has ceased to be news. It’s become routine. That is sad. But more than that, it’s dangerous.

This is why Joe Mandese and I have agreed to disagree about the role media can play in messaging around climate change, or – for that matter – any of the existential threats now facing us. Alarmist messaging could be the problem, not the solution.

Mandese ended his post with this:

“What the ad industry really needs to do is organize a massive global campaign to change the way people think, feel and behave about the climate — moving from a not-so-alarmist “change” to an “our house is on fire” crisis.”

Joe Mandese – Mediapost

But here’s the thing. Cranking up the crisis intensity on our messaging might have the opposite effect. It may paralyze us.

Something called “doom scrolling” is now very much a thing. And if you’re looking for doomsday scenarios, the best place to start is the subreddit r/collapse.

In a 30-second glimpse during the writing of this column, I discovered that democracy is dying, America is on the brink of civil war, Russia is turning off the tap on European oil supplies, we are being greenwashed into complacency, the Amazon Rainforest may never recover from its current environmental destruction and the “Doomsday” glacier is melting faster than expected. That was all above the fold. I didn’t even have to scroll for this all-you-can-eat buffet of disaster. These were just the appetizers.

There is a reason why social media feeds are full of doom. We are hardwired to pay close attention to threats. This makes apocalyptic prophesying very profitable for social media platforms. As British academic Julia Bell wrote in her 2020 book, Radical Attention:

“Behind the screen are impassive algorithms designed to ensure that the most outrageous information gets to our attention first. Because when we are enraged, we are engaged, and the longer we are engaged the more money the platform can make from us.”

Julia Bell – Radical Attention

But just what does a daily diet of doom do for our mental health? Does constantly making us aware of the impending end of our species goad us into action? Does it actually accomplish anything?

Not so much. In fact, it can do the opposite.

Mental health professionals are now treating a host of new climate change related conditions, including eco-grief, eco-anxiety and eco-depression. But, perhaps most alarmingly, they are now encountering something called eco-paralysis.

In an October 2020 Time.com piece on doom scrolling, psychologist Patrick Kennedy-Williams, who specializes in treating climate-related anxieties, was quoted: “There’s something inherently disenfranchising about someone’s ability to act on something if they’re exposed to it via social media, because it’s inherently global. There are not necessarily ways that they can interact with the issue.”

So, cranking up the intensity of the messaging on existential threats such as climate change may have the opposite effect, scaring us into doing nothing. This is because of something called the Yerkes-Dodson Law.

[Figure: the Yerkes-Dodson curve. Source: Yerkes and Dodson (1908), via Diamond DM, et al. (2007), “The Temporal Dynamics Model of Emotional Memory Processing,” Neural Plasticity, doi:10.1155/2007/60803, CC0, Wikimedia Commons.]

This “Law”, discovered by psychologists Robert Yerkes and John Dodson in 1908, isn’t so much a law as a psychological model. It’s a typical bell curve. On the front end, we find that our performance in responding to a situation increases along with our attention and interest in that situation. But the line does not go straight up. At some point, it peaks and then goes downhill. Intent gives way to anxiety. The more anxious we become, the more our performance is impaired.
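As a minimal sketch of that shape – the Gaussian form here is my own simplification, since Yerkes and Dodson described the relationship qualitatively rather than with a formula:

```python
import numpy as np

def performance(arousal, optimum=0.5, width=0.2):
    """Inverted-U: performance peaks at moderate arousal, decays on both sides."""
    return np.exp(-((arousal - optimum) ** 2) / (2 * width ** 2))

for a in np.linspace(0, 1, 5):
    print(f"arousal {a:.2f} -> performance {performance(a):.2f}")
# arousal 0.00 -> 0.04, 0.25 -> 0.46, 0.50 -> 1.00, 0.75 -> 0.46, 1.00 -> 0.04
```

Low arousal (apathy) and high arousal (anxiety) land in the same place: impaired performance. Only the moderate middle gets things done.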

When we fret about the future, we are actually grieving the loss of our present. In this process, we must make our way through the 5 stages of grief introduced by psychiatrist Elisabeth Kübler-Ross in 1969 through her work with terminally ill patients. The stages are: Denial, Anger, Bargaining, Depression and Acceptance.

One would think that triggering awareness would help accelerate us through the stages. But there are a few key differences. In dealing with a diagnosis of terminal illness, there is typically one hammer-blow event when you become aware of the situation. From there, dealing with it begins. And – even when it begins – it’s not a linear journey. As anyone who has ever grieved will tell you, what stage you’re in depends on which day you ask. You can slip from Acceptance to Anger in a heartbeat.

With climate change, awareness doesn’t come just once. The messaging never ends. It’s a constant cycle of crisis, trapping us in a loop that cycles between denial, depression and despair.

An excellent post on Climateandmind.org about climate grief talks about this cycle and how we get trapped within it. Some of us get stuck in a stage and never move on. Even climate scientist and activist Susanne Moser admits to being trapped in something she calls Functional Denial:

“It’s that simultaneity of being fully aware and conscious and not denying the gravity of what we’re creating (with Climate Change), and also having to get up in the morning and provide for my family and fulfill my obligations in my work.”

Susanne Moser

It’s exactly this sense of frustration I voiced in my previous post. But the answer is not making me more aware. Like Moser, I’m fully aware of the gravity of the various threats we’re facing. It’s not attention I lack, it’s agency.

I think the time to hope for a more intense form of messaging to prod the deniers into acceptance is long past. If they haven’t changed their minds yet, they ain’t goin’ to!

I also believe the messaging we need won’t come through social media. There’s just too much froth and too much profit in that froth.

What we need – from media platforms we trust – is a frank appraisal of the worst-case scenario of our future. We need to accept that and move on to deal with what is to come. We need to encourage resilience and adaptability. We need hope that while what is to come is most certainly going to be catastrophic, it doesn’t have to be apocalyptic.

We need to know we can survive and start thinking about what that survival might look like.