The Dangerous Bits about ChatGPT

Last week, I shared how ChatGPT got a few things wrong when I asked it “who Gord Hotchkiss was.” I did this with my tongue at least partially planted in cheek – but the response did show me a real potential danger, one that comes from how we will interact with ChatGPT.

When things go wrong, we love to assign blame. And if ChatGPT gets things wrong, we will be quick to point the finger at it. But let’s remember, ChatGPT is a tool, and the fault very seldom lies with the tool. The fault usually lies with the person using the tool.

First of all, let’s look at why ChatGPT put together a bio of me that was somewhat less than accurate (although it was very flattering to yours truly).

When AI Hallucinates

I have found a few articles that call ChatGPT out for lying. But lying is an intentional act, and – as far as I know – ChatGPT has no intention of deliberately leading us astray. Based on how ChatGPT pulls together information and synthesizes it into a natural language response, it actually thought that “Gord Hotchkiss” did the things it told me I had done.

It would be more accurate to say ChatGPT is hallucinating – giving a false picture based on the information it retrieves and then tries to connect into a narrative. It’s a flaw that will undoubtedly get better with time.

The problem comes with how ChatGPT handles its dataset and determines relevance between items in that dataset. In this thorough examination by Machine Learning expert Devansh Devansh, ChatGPT is compared to predictive autocomplete on your phone. Sometimes, through a glitch in the AI, it can take a weird direction.

When this happens on your phone, it’s word by word, and you can easily spot where things are going off the rails. With ChatGPT, an initial error that is small at first continues to propagate until the AI has spun complete bullshit and packaged it as truth. This is how it fabricated the Think Tank of Human Values in Business, a completely fictional organization, and inserted it into my CV in a very convincing way.
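To make that propagation mechanism concrete, here is a toy sketch in Python. The data is entirely made up, and this is an illustration of greedy next-word prediction in miniature, not ChatGPT's actual architecture – but it shows how one wrong turn conditions every word that follows:

```python
# A toy "autocomplete": each word maps to its most likely successor.
# The data is invented for illustration; real language models work over
# probabilities of tokens, but the failure mode is similar in miniature.
NEXT_WORD = {
    "gord": "hotchkiss",
    "hotchkiss": "founded",   # one plausible-sounding but false link...
    "founded": "the",
    "the": "think",
    "think": "tank",
    "tank": "of",
    "of": "human",
    "human": "values",
}

def generate(seed: str, length: int = 8) -> str:
    words = [seed]
    for _ in range(length):
        nxt = NEXT_WORD.get(words[-1].lower())
        if nxt is None:
            break
        words.append(nxt)   # each word is chosen given only the previous one
    return " ".join(words)

print(generate("Gord"))
# -> "Gord hotchkiss founded the think tank of human values"
# Fluent and confident, built one "likely" word at a time -- and entirely false.
```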

There are many, many others who know much more about AI and Natural Language Processing than I do, so I’m going to recognize my limits and leave it there. Let’s just say that ChatGPT is prone to sharing its AI hallucinations in a very convincing way.

Users of ChatGPT Won’t Admit Its Limitations

I know and you know that marketers are salivating over the possibility of AI producing content at scale for automated marketing campaigns. There is a frenzy of positively giddy accounts about how ChatGPT will “revolutionize Content Creation and Analysis” – including this admittedly tongue-in-cheek one co-authored by MediaPost Editor in Chief Joe Mandese and – of course – ChatGPT.

So what happens when ChatGPT starts to hallucinate in the middle of a massive social media campaign that is totally on autopilot? Who will be the ghost in the machine that says, “Whoa there, let’s just take a sec to make sure we’re not spinning out fictitious and potentially dangerous content”?

No one. Marketers are only human, and humans will always look for the path of least resistance. We work to eliminate friction, not add it. If we can automate marketing, we will. And we will shift the onus of verifying information to the consumer of that information.

Don’t tell me we won’t, because we have in the past and we will in the future.

We Believe What We’re Told

We might like to believe we’re Cartesian, but when it comes to consuming information, we’re actually Spinozan.

Let me explain. French philosopher René Descartes and Dutch philosopher Baruch Spinoza had two different views of how we determine if something is true.

Descartes believed that understanding and believing were two different processes. According to Descartes, when we get new information, we first analyze it and then decide whether we believe it or not. This is the rational assessment that publishers and marketers always insist we humans perform, and it’s their fallback position when they’re accused of spreading misinformation.

But Baruch Spinoza believed that understanding and belief happened at the same time. We start from a default position of believing information to be true without really analyzing it.

In 1993, Harvard Psychology Professor Daniel Gilbert decided to put the debate to the test (Gilbert, Tafarodi and Malone). He split a group of volunteers in half and gave both halves a text description detailing a real robbery. In the text there were true statements, printed in green, and false statements, printed in red. Some of the false statements made the crime appear to be more violent.

After reading the text, the study participants were supposed to decide on a fair sentence. But one of the groups got interrupted with distractions. The other group completed the exercise with no distractions. Gilbert and his researchers believed the distracted group would behave in a more typical way.

The distracted group gave out substantially harsher sentences than the other group. Because they were distracted, they forgot that green statements were true and red ones were false. They believed everything they read (in fact, Gilbert’s paper was called “You Can’t Not Believe Everything You Read”).

Gilbert’s study showed that humans tend to believe first, and that we actually have to “unbelieve” if something is eventually proven to us to be false. One study even found the place in our brain where this happens – the right inferior prefrontal cortex. This suggests that “unbelieving” causes the brain to work harder than believing, which happens by default.

This brings up a three-pronged dilemma when we consider ChatGPT: it will tend to hallucinate (at least for now), users of ChatGPT will disregard that flaw when there are significant benefits to doing so, and consumers of ChatGPT-generated content will believe those hallucinations without rational consideration.

When Gilbert wrote his paper, he was still three decades away from this dilemma, but he wrapped up with a prescient question:

“The Spinozan hypothesis suggests that we are not by nature, but we can be by artifice, skeptical consumers of information. If we allow this conceptualization of belief to replace our Cartesian folk psychology, then how shall we use it to structure our own society? Shall we pander to our initial gullibility and accept the social costs of prior restraint, realizing that some good ideas will inevitably be suppressed by the arbiters of right thinking? Or shall we deregulate the marketplace of thought and accept the costs that may accrue when people are allowed to encounter bad ideas? The answer is not an easy one, but history suggests that unless we make this decision ourselves, someone will gladly make it for us.”

Daniel Gilbert

What Gilbert couldn’t know at the time was that “someone” might actually be a “something.”

(Image: Etienne Girardet on Unsplash)

My Many Problems with the Metaverse

I recently had dinner with a comedian who had just done his first gig in the Metaverse. It was in a new Meta-Comedy Club. He was excited and showed me a recording of the gig.

I have to admit, my inner geek thought it was very cool: disembodied hands clapping with avataresque names floating above, bursts of virtual confetti for the biggest laughs and even a virtual hook that instantly snagged meta-hecklers, banishing them to meta-purgatory until they promised to behave. The comedian said he wanted to record a comedy meta-album in the meta-club to release to his meta-followers.

It was all very meta.

As mentioned, my inner geek is intrigued by the Metaverse. But as a human who ponders our future (probably more than is healthy), I have grave concerns on a number of fronts. I have mentioned most of these individually in previous posts, but I thought it might be useful to round them up:

Removed from Reality

My first issue is that the Metaverse just isn’t real. It’s a manufactured reality. This is at the heart of all the other issues to come.

We might think we’re clever, and that we can manufacture a better world than the one that nature has given us, but my response to that would be Orgel’s Second Rule, courtesy of Francis Crick, co-discoverer of DNA: “Evolution is cleverer than you are.”

For millions of years, we have evolved to be a good fit in our natural environment. There are thousands of generations of trial and error baked into our DNA that make us effective in our reality. Most of that natural adaptation lies hidden from us, ticking away below the surface of both our bodies and brains, silently correcting course to keep us aligned and functioning well in our world.

But we, in our never-ending human hubris, somehow believe we can engineer an environment better than reality in less than a single generation. If we take Second Life as the first iteration of the metaverse, we’re barely two decades into the engineering of a meta-reality.

If I were placing bets on who is the better environmental designer for us, humans or evolution, my money would be on evolution, every time.

Whose Law is It Anyway?

One of the biggest selling features of the Metaverse is that it frees us from the restrictions of geography. Physical distance has no meaning when we go meta.

But this also has issues. Societies need laws and our laws have evolved to be grounded within the boundaries of geographical jurisdictions. What happens when those geographical jurisdictions become meaningless? Right now, there are no laws specifically regulating the Metaverse. And even if there are laws in the future, in what jurisdiction would they be enforced?

This is a troubling loophole – and by hole I mean a massive gaping metaverse-sized void. You know who is attracted by a lack of laws? Those who have no regard for the law. If you don’t think that criminals are currently eyeing the metaverse looking for opportunity, I have a beautiful virtual time-share condo in the heart of meta-Boca Raton that I’d love to sell you.

Data is the Matter of the Metaverse

Another “selling feature” for the metaverse is the ability to append metadata to our own experiences, enriching them with access to information and opportunities that would be impossible in the real world. In the metaverse, the world is at our fingertips – or in our virtual headset – as the case may be. We can stroll through worlds, real or imagined, and the sum of all our accumulated knowledge is just one user-prompt away.

But here’s the thing about this admittedly intriguing notion: it makes data a commodity and commodities are built to be exchanged based on market value. In order to get something of value, you have to exchange something of value. And for the builders of the metaverse, that value lies in your personal data. The last shreds of personal privacy protection will be gone, forever!

A For-Profit Reality

This brings us to my biggest problem with the Metaverse – the motivation for building it. It is being built not by philanthropists or philosophers, academics or even bureaucrats. The metaverse is being built by corporations, who have to hit quarterly profit projections. They are building it to make a buck, or, more correctly, several billion bucks.

These are the same people who have made social media addictive by taking the dirtiest secrets of Las Vegas casinos and using them to enslave us through our smartphones. They have toppled legitimate governments for the sake of advertising revenue. They have destroyed our concept of truth, bashed apart the soft guardrails of society and are currently dismantling democracy. There is no noble purpose for a corporation – their only purpose is profit.

Do you really want to put your future reality in those hands?

The Ten-Day Tech Detox

I should have gone cold turkey on tech. I really should have.

It would have been the perfect time – should have been the perfect time.

But I didn’t. As I spent 10 days on BC’s gorgeous Sunshine Coast with family, I also trundled along my assortment of connected gadgets.

But I will say it was a partially successful detox. I didn’t crack open the laptop as much as I usually do. I generally restricted use of my iPad to reading a book.

But my phone – it was my phone, always within reach, that tempted me with social media’s siren call.

In a podcast, Andrew Selepak, social media professor at the University of Florida, suggests that, rather than attempting a total detox that is probably doomed to fail, you use vacations as an opportunity to treat tech as a tool rather than an addiction.

I will say that, for most of the time, that’s what I did. As long as I was occupied with something, I was fine.

Boredom is the enemy. It’s boredom that catches you. And the sad thing was, I really shouldn’t have been bored. I was in one of the most beautiful places on earth. I had the company of people I loved. I saw humpback whales – up close – for Heaven’s sake. If ever there was a time to live in the moment, to embrace the here and now, this was it. 

The problem, I realized, is that we’re not really comfortable any more with empty spaces – whether they be in conversation, in our social life or in our schedule of activities. We feel guilt and anxiety when we’re not doing anything.

It was an interesting cycle. As I decompressed after many weeks of being very busy, the first few days were fine. “I need this,” I kept telling myself. It’s okay just to sit and read a book. It’s okay not to have every half-hour slot of the day meticulously planned to jam as much in as possible.

That lasted about 48 hours. Then I started feeling like I should be doing something. I was uncomfortable with the empty spaces.

The fact is, as I learned, boredom always has been part of the human experience. It’s a feature – not a bug. As I said, boredom represents the empty spaces that allow themselves to be filled with creativity. Alicia Walf, a neuroscientist and a senior lecturer in the Department of Cognitive Science at Rensselaer Polytechnic Institute, says it is critical for brain health to let yourself be bored from time to time.

“Being bored can help improve social connections. When we are not busy with other thoughts and activities, we focus inward as well as looking to reconnect with friends and family.

“Being bored can help foster creativity. The eureka moment when solving a complex problem when one stops thinking about it is called insight.

“Additionally, being bored can improve overall brain health. During exciting times, the brain releases a chemical called dopamine which is associated with feeling good. When the brain has fallen into a predictable, monotonous pattern, many people feel bored, even depressed. This might be because we have lower levels of dopamine.”

That last bit, right there, is the clue to why our phones are particularly prone to being picked up in times of boredom. Actually, three things are at work here. The first is that our mobile devices let us carry an extended social network in our pockets. As an article from Harvard explains: “Thanks to the likes of Facebook, Snapchat, Instagram, and others, smartphones allow us to carry immense social environments in our pockets through every waking moment of our lives.”

As Walf said, boredom is our brain’s way of cueing us to seek social interaction. Traditionally, this meant getting the hell out of our cave – or cabin – or castle – and getting some face time with other humans.

But technology has short-circuited that. Now, we get that social connection through the far less healthy substitute of a social media platform. And – in the most ironic twist – we get that social jolt not by interacting with the people we might happen to be with, but by each staring at a tiny little screen that we hold in our hand.

The second problem is that mobile devices are not designed to leave us alone, basking in our healthy boredom. They are constantly beeping, buzzing and vibrating to get our attention. 

The third problem is that – unlike a laptop or even a tablet – mobile devices are our device of choice when we are jonesing for a dopamine jolt. It’s our phones we reach for when we’re killing time in a lineup, riding the bus or waiting for someone in a coffee shop. This is why I had a hard time relegating my phone to being just a tool while I was away.

As a brief aside – even the term “killing time” shows how we are scared to death of being bored. That’s a North American saying – boredom is something to be hunted down and eradicated. You know what Italians call it? “Il dolce far niente” – the sweetness of doing nothing. Many are the people who try to experience life by taking endless photos and posting on various feeds, rather than just living it. 

The fact is, we need boredom. Boredom is good, but we are declaring war on it, replacing it with a destructive need to continually bathe our brains in the dopamine high that comes from checking our Facebook feed or latest TikTok reel.

At least one of the architects of this vicious cycle feels some remorse (also from the Harvard article): “I feel tremendous guilt,” admitted Chamath Palihapitiya, former Vice President of User Growth at Facebook, to an audience of Stanford students. He was responding to a question about his involvement in exploiting consumer behavior. “The short-term, dopamine-driven feedback loops that we have created are destroying how society works.”

That is why we have to put the phone down and watch the humpback whales. That, miei amici, is il dolce far niente!

The Biases of Artificial Intelligence: Our Devils are in the Data

I believe that – over time – technology does move us forward. I further believe that, even with all the unintended consequences it brings, technology has made the world a better place to live in. I would rather step forward with my children and grandchildren (the first of which has just arrived) into a more advanced world than step backwards in the world of my grandparents, or my great grandparents. We now have a longer and better life, thanks in large part to technology. This, I’m sure, makes me a techno-optimist.

But my optimism is of a pragmatic sort. I’m fully aware that it is not a smooth path forward. There are bumps and potholes aplenty along the way. I accept that along with my optimism.

Technology, for example, does not play all that fairly. Techno-optimists tend to be white and mostly male. They usually come from rich countries, because technology helps rich countries far more than it helps poor ones. Technology plays by the same rules as trickle-down economics: a rising tide that will eventually raise all boats, just not at the same rate.

Take democracy, for instance. In June 2009, journalist Andrew Sullivan declared “The revolution will be Twittered!” after protests erupted in Iran. Techno-optimists and neo-liberals were quick to declare social media and the Internet as the saviour of democracy. But, even then, the optimism was premature – even misplaced.

In his book The Net Delusion: The Dark Side of Internet Freedom, journalist and social commentator Evgeny Morozov details how digital technologies have been just as effectively used by repressive regimes to squash democracy. The book was published in 2011. Just five years later, that same technology would take the U.S. on a path that came perilously close to dismantling democracy. As of right now, we’re still not sure how it will all work out. As Morozov reminds us, technology – in and of itself – is not an answer. It is a tool. Its impact will be determined by those who built the tool and, more importantly, those who use it.

Also, tools are not built out of the ether. They are necessarily products of the environment that spawned them. And this brings us to the systemic problems of artificial intelligence.

Search is something we all use every day, and we probably don’t think of Google (or other search engines) as biased, or even racist. But a recent study published in the journal Proceedings of the National Academy of Sciences shows that the algorithms behind search are built on top of the biases endemic in our society.

“There is increasing concern that algorithms used by modern AI systems produce discriminatory outputs, presumably because they are trained on data in which societal biases are embedded,” says Madalina Vlasceanu, a postdoctoral fellow in New York University’s psychology department and the paper’s lead author.

To assess possible gender bias in search results, the researchers examined whether words that should refer with equal probability to a man or a woman, such as “person,” “student,” or “human,” are more often assumed to be a man. They conducted Google image searches for “person” across 37 countries. The results showed that the proportion of male images yielded from these searches was higher in nations with greater gender inequality, revealing that algorithmic gender bias tracks with societal gender inequality.
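At its core, that measurement reduces to a correlation between two per-country numbers. Here is a minimal sketch of the idea – the figures below are invented placeholders for illustration, not the study's data:

```python
# Hypothetical per-country figures, for illustration only:
# a gender-inequality index and the share of male images returned
# for a gender-neutral query like "person".
from statistics import correlation  # Python 3.10+

inequality_index = [0.12, 0.25, 0.40, 0.55, 0.70]
male_image_share = [0.52, 0.58, 0.66, 0.71, 0.80]

r = correlation(inequality_index, male_image_share)
print(f"r = {r:.2f}")  # close to 1.0 for this toy data; a strong positive
                       # correlation is the pattern the PNAS paper reports.
```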

In a 2020 opinion piece in the MIT Technology Review, researcher and AI activist Deborah Raji wrote:

“I’ve often been told, ‘The data does not lie.’ However, that has never been my experience. For me, the data nearly always lies. Google Image search results for ‘healthy skin’ show only light-skinned women, and a query on ‘Black girls’ still returns pornography. The CelebA face data set has labels of ‘big nose’ and ‘big lips’ that are disproportionately assigned to darker-skinned female faces like mine. ImageNet-trained models label me a ‘bad person,’ a ‘drug addict,’ or a ‘failure.’ Data sets for detecting skin cancer are missing samples of darker skin types.”

Deborah Raji, MIT Technology Review

These biases in search highlight the biases in a culture. Search brings back a representation of content that has been published online; a reflection of a society’s perceptions. In these cases, the devil is in the data. The search algorithm may not be inherently biased, but it does reflect the systemic biases of our culture. The more biased the culture, the more it will be reflected in technologies that comb through the data created by that culture. This is regrettable in something like image search results, but when these same biases show up in the facial recognition software used in the justice system, it can be catastrophic.

In an article in Penn Law’s Regulatory Review, the authors reported that, “In a 2019 National Institute of Standards and Technology report, researchers studied 189 facial recognition algorithms — ‘a majority of the industry.’ They found that most facial recognition algorithms exhibit bias. According to the researchers, facial recognition technologies falsely identified Black and Asian faces 10 to 100 times more often than they did white faces. The technologies also falsely identified women more than they did men — making Black women particularly vulnerable to algorithmic bias. Algorithms using U.S. law enforcement images falsely identified Native Americans more often than people from other demographics.”

Most of these issues lie with how technology is used. But what about those who build the technology? Couldn’t they program the bias out of the system?

There we have a problem. The thing about societal bias is that it is typically recognized by its victims, not by those who propagate it. And the culture of the tech industry is neither gender-balanced nor diverse. According to a report from the McKinsey Institute for Black Economic Mobility, experts in tech believe that, on the current trajectory, it would take 95 years for Black workers to reach an equitable level of private sector paid employment.

Facebook, for example, moved less than a single percentage point – from 3% in 2014 to 3.8% in 2020 – in its hiring of Black tech workers, but improved by 8% in those same six years when hiring women. Only 4.3% of the company’s workforce is Hispanic. This essential whiteness of tech extends to the field of AI as well.

Yes, I’m a techno-optimist, but I realize that optimism must be placed in the people who build and use the technology. And because of that, we must try harder. We must do better. Technology alone isn’t the answer for a better, fairer world.  We are.

The Physical Foundations of Friendship

It’s no secret that I worry about what the unintended consequences might be for us as we increasingly substitute a digital world for a physical one. What might happen to our society as we spend less time face-to-face with people and more time face-to-face with a screen?

Take friendship, for example. I have written before about how Facebook friends and real friends are not the same thing. A lot of this has to do with the mental work required to maintain a true friendship. This cognitive requirement led British anthropologist Robin Dunbar to come up with something called Dunbar’s Number – a rough rule of thumb that says we can’t really maintain a network of more than 150 friends, give or take a few.

Before you say, “I have way more friends on Facebook than that,” realize that I don’t care what your Facebook Friend count is. Mine numbers at least three times more than Dunbar’s 150 limit. But they are not all true friends. Many are just the result of me clicking a link on my laptop. It’s quick, it’s easy, and there is absolutely no requirement to put any skin in the game. Once clicked, I don’t have to do anything to maintain these friendships. They are just part of a digital tally that persists until I might click again, “unfriending” them. Missing is the ongoing physical friction that demands the maintenance required to keep a true friendship from slipping into entropy.

So I was wondering – what is that magical physical and mental alchemy that causes us to become friends with someone in the first place? When we share physical space with another human, what is the spark that causes us to want to get to know them better? Or – on the flip side – what are the red flags that cause us to head for the other end of the room to avoid talking to them? Fortunately, there is some science that has addressed those questions.

We become friends because of something sociologists call homophily – being like each other. In today’s world, that leads to some unfortunate social consequences, but in our evolutionary environment, it made sense. It has to do with kinship ties and what ethologist Richard Dawkins called The Selfish Gene. We want family to survive to pass on our genes. The best way to motivate us to protect others is to have an emotional bond to them. And it just so happens that family members tend to look somewhat alike. So we like – or love – others who are like us.

If we tie in the impact of geography over our history, we start to understand why this is so. Geography that restricted travel and led to inbreeding generally dictated a certain degree of genetic “sameness” in our tribe. It was a quick way to sort in-groups from out-groups. And in a bloodier, less politically correct world, this was a matter of survival.

But this geographic connection works both ways. Geographic restrictions lead to homophily, but repeated exposure to the same people also increases the odds that you’ll like them. In psychology, this is called the mere-exposure effect.

In these two ways, the limitations of a physical world have a deep, deep impact on the nature of friendship. But let’s focus on the first for a moment.

It appears we have built-in “friend detectors” that can actually sense genetic similarities. In a rather fascinating study, Nicholas Christakis and James Fowler found that friends are so alike genetically, they could actually be family. If you drill down to the individual building blocks of a gene at the nucleotide level, your friends are as alike genetically to you as your fourth cousin. As Christakis and Fowler say in their study, “friends may be a kind of ‘functional kin’.”

This shows how deeply friendship bonds are hardwired into us. Of course, this doesn’t happen equally across all genes. Evolution is nothing if not practical. For example, Christakis and Fowler found that specific systems do stay “heterophilic” (not alike) – such as our immune system. This makes sense. If you have a group of people who stay in close proximity to each other, it’s going to remain more resistant to epidemics if there is some variety in what they’re individually immune to. If everyone had exactly the same immunity profile, the group would be highly resistant to some bugs and completely vulnerable to others. It would be putting all your disease prevention eggs in one basket.

But in another example of extreme genetic practicality, how similar we smell to our friends can be determined genetically.  Think about it. Would you rather be close to people who generally smell the same, or those that smell different? It seems a little silly in today’s world of private homes and extreme hygiene, but when you’re sharing very close living quarters with others and there’s no such thing as showers and baths, how everyone smells becomes extremely important.

Christakis and Fowler found that our olfactory sensibilities tend to trend to the homophilic side between friends. In other words, the people we like smell alike. And this is important because of something called olfactory fatigue. We use smell as a difference detector. It warns us when something is not right. And our nose starts to ignore smells it gets used to, even offensive ones. It’s why you can’t smell your own typical body odor. Or, in another even less elegant example, it’s why your farts don’t stink as much as others. 

Given all this, it would make sense that if you had to spend time close to others, you would pick people who smelled like you. Your nose would automatically be less sensitive to their smells. And that’s exactly what a new study from the Weizmann Institute of Science found. In the study, the scent signatures of complete strangers were sampled using an electronic sniffer called an eNose. Then the strangers were asked to engage in nonverbal social interactions in pairs. Afterward, they were asked to rate each interaction based on how likely they would be to become friends with the person. The result? Based on their smells alone, the researchers were able to predict with 71% accuracy who would become friends.

The foundations of friendship run deep – down to the genetic building blocks that make us who we are. These foundations were built in a physical world over millions of years. They engage senses that evolved to help us experience that physical world. Those foundations are not going to disappear in the next decade or two, no matter how addictive Facebook or TikTok becomes. We can continue to layer technology over these foundations, but to deny them is to ignore human nature.

Making Time for Quadrant Two

Several years ago, I read Stephen Covey’s “The 7 Habits of Highly Effective People.” It had a lasting impact on me. Through my life, I have found myself relearning those lessons over and over again.

One of them was the four quadrants of time management. How we spend our time in these quadrants determines how effective we are.

Imagine a box split into four quadrants. In the upper left, we’ll put a label: “Important and Urgent.” Next to it, in the upper right, we’ll put a label saying “Important But Not Urgent.” The label for the lower left is “Urgent But Not Important.” And the last quadrant — in the lower right — is labeled “Not Important nor Urgent.”

The upper left quadrant — “Important and Urgent” — is our firefighting quadrant. It’s the stuff that is critical and can’t be put off, the emergencies in our life.

We’ll skip over quadrant two — “Important But Not Urgent” — for a moment and come back to it.

In quadrant three — “Urgent But Not Important” — are the interruptions that other people bring to us. These are the times we should say, “That sounds like a you problem, not a me problem.”

Quadrant four is where we unwind and relax, occupying our minds with nothing at all in order to give our brains and body a chance to recharge. Bingeing Netflix, scrolling through Facebook or playing a game on our phones all fall into this quadrant.

And finally, let’s go back to quadrant two: “Important But Not Urgent.” This is the key quadrant. It’s here where long-term planning and strategy live. This is where we can see the big picture.

The secret of effective time management is finding ways to shift time spent from all the other quadrants into quadrant two. It’s managing and delegating emergencies from quadrant one, so we spend less time fire-fighting. It’s prioritizing our time above the emergencies of others, so we minimize interruptions in quadrant three. And it’s keeping just enough time in quadrant four to minimize stress and keep from being overwhelmed.
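For what it’s worth, the whole framework reduces to a two-question sort. Here is a minimal sketch of Covey’s matrix as a sorting rule – the task names are hypothetical, invented for illustration:

```python
# Covey's four quadrants as a sorting rule over two booleans.
def quadrant(important: bool, urgent: bool) -> int:
    if important and urgent:
        return 1  # firefighting
    if important:
        return 2  # planning and strategy -- the key quadrant
    if urgent:
        return 3  # other people's interruptions
    return 4      # recharging

tasks = [
    ("site outage", True, True),
    ("long-term strategy", True, False),
    ("a colleague's 'quick question'", False, True),
    ("scrolling Facebook", False, False),
]

for name, important, urgent in tasks:
    print(f"Quadrant {quadrant(important, urgent)}: {name}")
```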

The lesson of the four quadrants came back to me when I was listening to an interview with Dr. Sandro Galea, epidemiologist and author of “The Contagion Next Time.” Dr. Galea was talking about how our health care system responded to the COVID pandemic. The entire system was suddenly forced into quadrant one. It was in crisis mode, trying desperately to keep from crashing. Galea reminded us that we were forced into this mode, despite there being hundreds of lengthy reports from previous pandemics — notably the SARS crisis — containing thousands of suggestions that could have helped to partially mitigate the impact of COVID.

Few of those suggestions were ever implemented. Our health care system, Galea noted, tends to continually lurch back and forth within quadrant one, veering from crisis to crisis. When a crisis is over, rather than go to quadrant two and make the changes necessary to avoid similar catastrophes in the future, we put the inevitable reports on a shelf where they’re ignored until it is — once again — too late.

For me, that paralleled a theme I have talked about often in the past — how we tend to avoid grappling with complexity. Quadrant two stuff is, inevitably, complex in nature. The quadrant is jammed with what we call wicked problems. In a previous column, I described these as, “complex, dynamic problems that defy black-and-white solutions. These are questions that can’t be answered by yes or no — the answer always seems to be maybe.  There is no linear path to solve them. You just keep going in loops, hopefully getting closer to an answer but never quite arriving at one. Usually, the optimal solution to a wicked problem is ‘good enough — for now.’”

That’s quadrant two in a nutshell. Quadrant-one problems must be triaged into a sort of false clarity. You have to deal with the critical stuff first. The nuances and complexity are, by necessity, ignored. That all gets pushed to quadrant two, where we say we will deal with it “someday.”

Of course, someday never comes. We either stay in quadrant one, are hijacked into quadrant three, or collapse through sheer burnout into quadrant four. The stuff that waits for us in quadrant two is just too daunting to even consider tackling.

This has direct implications for technology and every aspect of the online world. Our industry, because of its hyper-compressed timelines and the huge dollars at stake, seems firmly lodged in the urgency of quadrant one. Everything on our to-do list tends to be a fire we have to put out. And that’s true even if we only consider the things we intentionally plan for. When we factor in the unplanned emergencies, quadrant one is a time-sucking vortex that leaves nothing for any of the other quadrants.

But there is a seemingly infinite number of quadrant two things we should be thinking about. Take social media and privacy, for example. When an online platform has a massive data breach, that is a classic quadrant one catastrophe. It’s all hands on deck to deal with the crisis. But all the complex questions around what our privacy might look like in a data-inundated world fall into quadrant two. As such, they are things we don’t think much about. They’re important, but not urgent.

Quadrant two thinking is systemic thinking, long-term and far-reaching. It allows us to build the foundations that help to mitigate crises and minimize unintended consequences.

In a world that seems to rush from fire to fire, it is this type of thinking that could save our asses.

Pursuing a Plastic Perfection

“Within every dystopia, there’s a little utopia”

— novelist Margaret Atwood

We’re a little obsessed with perfection. For myself, this has taken the form of a lifelong crush on Mary Poppins (Julie Andrews from the 1964 movie), who is “practically perfect in every way.”

We’ve been seeking perfection for some time now. The idea of creating Utopia, a place where everything is perfect, has been with us since the Garden of Eden. As humans have trodden down our timeline, we have been desperately seeking mythical Utopias, then religious ones, which then led to ideological ones.

Sometime around the beginning of the last century, we started turning to technology and science for perfection. Then, in the middle of the 20th century, we abruptly swung the other way, veering toward Dystopia while fearing that technology would take us to the dark side, à la George Orwell’s “1984” and Aldous Huxley’s “Brave New World.”

Lately, other than futurist Ray Kurzweil and the starry-eyed engineers of Silicon Valley, I think it’s fair to say that most of us have accepted that technology is probably a mixed bag at best: some good and some bad. Hopefully, when the intended consequences are tallied with the unintended ones, we net out a little to the positive. But we can all agree that it’s a long way from perfection.

This quest for perfection is taking some bizarre twists. Ultimately, it comes down to what we feel we can control, focusing our lives on the thinnest of experiences: that handful of seconds that someone pays attention to our social media posts.

It’s a common psychological reaction: the more we feel that our fate is beyond our control, the more we obsess about those things we feel we can control. And on social media, if we can’t control our world, our country, our town or even our own lives, perhaps our locus of control becomes narrowed to the point where the only thing left is our own appearance.

This effect is exacerbated by our cultural obsession with physical attractiveness. Beauty may only be skin deep, but in our world, it seems to count for everything that matters. Especially on Snapchat and Instagram.

And where there’s a need, there is a technological way. Filters and apps that offer digitally altered perfection have proliferated. One is Facetune 2, a retouching app that takes your selfie and adjusts lighting, removes blemishes, whitens teeth and nudges you closer and closer to perfection.

In one blog post, the Facetune team, inspired by Paris Hilton, encourages you to start “sliving.” Not sure what the hell “sliving” is? Apparently, it’s a combination of “slaying it” and “living your best life.” It’s an updated version of “that’s hot” for a new audience.

Of course, it doesn’t hurt if you happen to look like Ms. Hilton or Kim Kardashian. The post assures us that it’s not all about appearance. Apparently, “owning it” and “being kind to yourself” are also among the steps to better “sliving.” But as you read down the post, it does ultimately come back to how you look, reinforced with this pearl of wisdom: “a true sliv’ is also going to look their absolute best when it counts.”

And if that sounds about as deep as Saran Wrap, what do you expect when you turn to Paris Hilton for your philosophy of life? Plato she’s not.

Other social filter apps go even farther, essentially altering your picture until it’s no longer recognizable. Bulges are gone, to be replaced by chiseled torsos and optimally rounded butts. Cheeks are digitally sucked in and noses are planed to perfection. Eyes sparkle and teeth gleam. The end product? Sure, it looks amazing. It’s just not you anymore.

With all this pressure put on having a perfect appearance, it’s little wonder that it’s royally messing with our heads (what’s inside the head, not the outside). Hence the new disease of Snapchat Dysmorphia. I wish it were harder to believe in this syndrome — but it’s when people, many of them young girls, book a consultation with a plastic surgeon, wanting to look exactly like the result of their filtered Snapchat selfies.

According to one academic article, one in 50 Americans suffers from body dysmorphic disorder, where sufferers

“are preoccupied with at least one nonexistent or slight defect in physical appearance. This can lead them to think about the defect for at least one hour a day, therefore impacting their social, occupational, and other levels of functioning. The individual also should have repetitive and compulsive behaviors due to concerns arising from their appearances. This includes mirror checking and reassurance seeking among others.”

There’s nothing wrong with wanting perfection. As the old saying goes, it might be the enemy of good, but it can be a catalyst for better. We just have to go on knowing that perfection is never going to be attainable.

But social media is selling us a bogus bill of goods: The idea that perfect is possible and that everyone but us has figured it out.  

The Joe Rogan Experiment in Ethical Consumerism

We are watching an experiment in ethical consumerism take place in real time. I’m speaking of the Joe Rogan/Neil Young controversy that’s happening on Spotify. I’m sure you’ve heard of it, but if not, Canadian musical legend Neil Young had finally had enough of Joe Rogan’s spreading of COVID misinformation on his podcast, “The Joe Rogan Experience.” He gave Spotify an ultimatum: “You can have Rogan or Young. Not both.”

Spotify chose Rogan. Young pulled his library. Since then, a handful of other artists have followed Young, including former bandmates David Crosby, Stephen Stills and Graham Nash, along with fellow Canuck Hall of Famer Joni Mitchell.

But it has hardly been a stampede. One of the reasons is that — if you’re an artist — leaving Spotify is easier said than done. In an interview with Rolling Stone, Rosanne Cash said most artists don’t have the luxury of jilting Spotify: 

“It’s not viable for most artists. The public doesn’t understand the complexities. I’m not the sole rights holder to my work… It’s not only that a lot of people who aren’t rights holders can’t remove their work. A lot of people don’t want to. These are the digital platforms where they make a living, as paltry as it is. That’s the game. These platforms own, what, 40 percent of the market share?”

Cash also brings up a fundamental issue with capitalism: it follows profit, and it’s consumers who determine what’s profitable. Consumers make decisions based on self-interest: what’s in it for them. Corporations use that predictable behavior to make the biggest profit possible. That behavior has been perfectly predictable for hundreds of years. It’s the driving force behind Adam Smith’s Invisible Hand. It was also succinctly laid out by economist Milton Friedman in 1970:

“There is one and only one social responsibility of business — to use its resources and engage in activities designed to increase its profits so long as it stays within the rules of the game, which is to say, engages in open and free competition without deception or fraud.”

We all want corporations to be warm and fuzzy — but it’s like wishing a shark were a teddy bear. It just ain’t gonna happen.

One who indulged in this wishful thinking was a somewhat less well-known Canadian artist who also pulled his music from Spotify: Ontario singer/songwriter Danny Michel. He told the CBC:

“But for me, what it was was seeing how Spotify chose to react to Neil Young’s request, which was, you know: You can have my music or Joe. And it seems like they just, you know, got out a calculator, did some math, and chose to let Neil Young go. And they said, clear and loud: We don’t need you. We don’t need your music.”

Well, yes, Danny, I’m pretty sure that’s exactly what Spotify did. It made a decision based on profit. For one thing, Joe Rogan is exclusive to Spotify. Neil Young isn’t. And Rogan produces a podcast, which can have sponsors. Neil Young’s catalog of songs can’t be brought to you by anyone.

That makes Rogan a much better bet for revenue generation. That’s why Spotify paid Rogan $100 million. Music journalist Ted Gioia made the business case for the Rogan deal pretty clear in a tweet:

“A musician would need to generate 23 billion streams on Spotify to earn what they’re paying Joe Rogan for his podcast rights (assuming a typical $.00437 payout per stream). In other words, Spotify values Rogan more than any musician in the history of the world.”
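The arithmetic behind that claim is easy to check, using the figures as quoted in the tweet (not official Spotify numbers):

```python
# Rough check of Gioia's math, using his quoted figures.
rogan_deal = 100_000_000        # reported value of the Rogan deal, USD
payout_per_stream = 0.00437     # "typical" per-stream payout, USD

streams_needed = rogan_deal / payout_per_stream
print(f"{streams_needed:,.0f} streams")  # ~22,883,295,195 -- about 23 billion
```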

I hate to admit that Milton Friedman is right, but he is. I’ve said it time and time again: to expect corporations to put ethics ahead of profits is to ignore the DNA of a corporation. Spotify is doing what corporations will always do: strive to be profitable. The decision between Rogan and Young was made with a calculator. And for Danny Michel to expect anything else from Spotify is simply naïve. If we’re going to play this ethical capitalism game, we must realize what the rules of engagement are.

But what about us? Are we any better than the corporations we keep putting our faith in?

We have talked about how we consumers want to trust the brands we deal with, but when a corporation drops the ethics ball, do we really care? We have been gnashing our teeth about Facebook’s many, many indiscretions for years now, but how many of us have quit Facebook? I know I haven’t.

I’ve seen some social media buzz about migrating from Spotify to another service. I personally have started down this road. Part of it is because I agree with Young’s stand. But I’ll be brutally honest here. The bigger reason is that I’m old and I want to be able to continue to listen to the Young, Mitchell and CSNY catalogs. As one of my contemporaries said in a recent post, “Neil Young and Joni Mitchell? Wish it were artists who are _younger_ than me.”

A lot of pressure is put on companies to be ethical, with no real monetary reasons why they should be. If we want ethics from our corporations, we have to make it important enough to us to impact our own buying decisions. And we aren’t doing that — not in any meaningful way.

I’ve used this example before, but it bears repeating. We all know how truly awful and unethical caged egg production is. The birds are kept in what is known as a battery cage holding 5 to 10 birds and each is confined to a space of about 67 square inches. To help you visualize that, it’s just a bit bigger than a standard piece of paper folded in half. This is the hell we inflict on other animals solely for our own gain. No one can be for this. Yet 97% of us buy these eggs, just because they’re cheaper.

If we’re looking for ethics, we have to look in other places than brands. And — much as I wish it were different — we have to look beyond consumers as well. We have proven time and again that our convenience and our own self-interest will always come ahead of ethics. We might wish that were different, but our spending patterns say otherwise.

Why Is Willful Ignorance More Dangerous Now?

In last week’s post, I talked about how the presence of willful ignorance is becoming something we not only have to accept, but also learn how to deal with. In that post, I intimated that the stakes are higher than ever, because willful ignorance can do real damage to our society and our world.

So, if we’ve lived with willful ignorance for our entire history, why is it now especially dangerous? I suspect it’s not so much that willful ignorance has changed, but rather the environment in which we find it.

The world we live in is more complex because it is more connected. But there are two sides to this connection, one in which we’re more connected, and one where we’re further apart than ever before.

Technology Connects Us…

Our world and our society are made of networks. And when it comes to our society, connection creates networks that are more interdependent, leading to complex behaviors and non-linear effects.

We must also realize that our rate of connection is accelerating. The pace of technology has always been governed by Moore’s Law, the tenet that the speed and capability of our computers will double every two years. For almost 60 years, this law has been surprisingly accurate.
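It’s worth pausing on what that doubling compounds to. A quick back-of-envelope calculation, taking the two-year doubling period and the roughly 60-year span above at face value:

```python
# Doubling every two years, compounded over ~60 years.
years = 60
doublings = years / 2           # 30 doublings
growth = 2 ** doublings
print(f"{growth:,.0f}x")        # 1,073,741,824x -- about a billionfold increase
```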

What this has meant for our ability to connect digitally is that the number and impact of our connections have also increased exponentially, and they will continue to increase in our future. This creates a much denser and more interconnected network, but it has also created a network that overcomes the naturally self-regulating effects of distance.

For the first time, we can have strong and influential connections with others on the other side of the globe. And, as we forge more connections through technology, we are starting to rely less on our physical connections.

And Drives Us Further Apart

The wear and tear of a life spent bumping into each other in a physical setting tends to smooth out our rougher ideological edges. In face-to-face settings, most of us are willing to moderate our own personal beliefs in order to conform to the rest of the crowd. In the early 1950s, psychologist Solomon Asch showed how willing we were to ignore the evidence of our own eyes in order to conform to the majority opinion of a crowd.

For the vast majority of our history, physical proximity has forced social conformity upon us. It leavens out our own belief structure in order to keep the peace with those closest to us, fulfilling one of our strongest evolutionary urges.

But, thanks to technology, that’s also changing. We are spending more time physically separated but technically connected. Our social conformity mechanisms are being short-circuited by filter bubbles where everyone seems to share our beliefs. This creates something called an availability bias: the things we see coming through our social media feeds form our view of what the world must be like, even though statistically they are not representative of reality.

It gives the willfully ignorant the illusion that everyone agrees with them — or, at least, enough people agree with them that it overcomes the urge to conform to the majority opinion.

Ignorance in a Chaotic World

These two things make our world increasingly fragile and subject to what chaos theorists call the Butterfly Effect, where seemingly small things can make massive differences.

It’s this unique nature of our world, which is connected in ways it never has been before, that creates at least three reasons why willful ignorance is now more dangerous than ever:

One: The impact of ignorance can be quickly amplified through social media, causing a Butterfly Effect cascade. Case in point: the falsehood that the U.S. election results weren’t valid, leading to the Capitol insurrection of Jan. 6.

The mechanics of social media that led to this issue are many, and I have cataloged most of them in previous columns: the nastiness that comes from arm’s-length discourse, a rewiring of our morality, and the impact of filter bubbles on our collective thresholds governing anti-social behaviors.

Two: Probably a bigger cause for concern, the willfully ignorant are very easily consolidated into a power base for politicians willing to play to their beliefs. The far right — and, to a somewhat lesser extent, the far left — has learned this to devastating effect. All you have to do is abandon your predilection for telling the truth so you can help them rationalize their deliberate denial of facts. Do this and you have tribal support that is almost impossible to shake.

The move of populist politicians to use the willfully ignorant as a launch pad for their own purposes further amplifies the Butterfly Effect, ensuring that the previously unimaginable will continue to be the new state of normal.

Three: our expanding impact on the physical world. It’s not just our degree of connection that technology is changing exponentially; it’s also the degree of impact we have on our physical world.

For almost our entire time on earth, the world has made us. We have evolved to survive in our physical environment, where we have been subject to the whims of nature.

But now, increasingly, we humans are shaping the nature of the world we live in. Our footprint has an ever-increasing impact on our environment, and that footprint is also increasing exponentially, thanks to technology.

The earth and our ability to survive on it are — unfortunately — now dependent on our stewardship. And that stewardship is particularly susceptible to the impact of willful ignorance. In the area of climate change alone, willful ignorance could — and has — led to events with massive consequences. A recent study estimates that climate change is directly responsible for 5 million deaths a year.

For all these reasons, willful ignorance is now something that can have life and death consequences.

Making Sense of Willful Ignorance

Willful ignorance is nothing new. Depending on your beliefs, you could say it was willful ignorance that got Adam and Eve kicked out of the Garden of Eden. But the visibility of it is higher than it’s ever been before. In the past couple of years, we have had a convergence of factors that has pushed willful ignorance to the surface — a perfect storm of fact denial.

Some of those factors include social media, the erosion of traditional journalism and a global health crisis that has us all focusing on the same issue at the same time. The net result of all this is that we all have a very personal interest in the degree of ignorance prevalent in our society.

In one very twisted way, this may be a good thing. As I said, the willfully ignorant have always been with us. But we’ve always been able to shrug and move on, muttering “stupid is as stupid does.”

Now, however, the stakes are getting higher. Our world and society are at a point where willful ignorance can inflict some real and substantial damage. We need to take it seriously and we must start thinking about how to limit its impact.

So, for myself, I’m going to spend some time understanding willful ignorance. Feel free to come along for the ride!

It’s important to understand that willful ignorance is not the same as being stupid — or even just being ignorant, despite thousands of social media memes to the contrary.

Ignorance is one thing. It means we don’t know something. And sometimes, that’s not our fault. We don’t know what we don’t know. But willful ignorance is something very different. It is us choosing not to know something.

For example, I know many smart people who have chosen not to get vaccinated. Their reasons may vary. I suspect fear is a common denominator, and there is no shame in that. But rather than seek information to allay their fears, these folks have doubled down on beliefs based on little to no evidence. They have made a choice to ignore the information that is freely available.

And that’s doubly ironic, because the very same technology that enables willful ignorance has made more information available than ever before.

Willful ignorance is defined as “a decision in bad faith to avoid becoming informed about something so as to avoid having to make undesirable decisions that such information might prompt.”

And this is where the problem lies. The explosion of content has meant there is always information available to support any point of view. Add to that the breakdown of journalistic principles that has occurred over the past 40 years, and we have a dangerous world of information that has been deliberately falsified in order to appeal to a segment of the population that has chosen to be willfully ignorant.

It seems a contradiction: The more information we have, the more that ignorance is a problem. But to understand why, we have to understand how we make sense of the world.

Making Sense of Our World

Sensemaking is a concept that was first introduced by organizational theorist Karl Weick in the 1970s. The concept has been borrowed by those working in the areas of machine learning and artificial intelligence. At the risk of oversimplification, it provides a model to help us understand how we “give meaning to our collective experiences.”

(Diagram: D.T. Moore and R. Hoffman, 2011)

The above diagram (from a 2011 paper by David T. Moore and Robert R. Hoffman) shows the sensemaking process. It starts with a frame — our understanding of what is true about the world. As we get presented with new data, we have to make a choice: Does it fit our frame or doesn’t it?

If it does, we preserve the frame and may elaborate on it, fitting the new data into it. If the data doesn’t support our existing frame, we then have to reframe, building a new frame from scratch.

Our brains love frames. It’s much less work for the brain to keep a frame than to build a new one. That’s why we tend to stick with our beliefs — another word for a frame — until we’re forced to discard them.

But, as with all human traits, our ways of making sense of the world vary across the population. Some of us are more apt to spend time on the right side of the diagram — more open to reframing, and always open to evidence that may cause us to reframe.

That, by the way, is exactly how science is supposed to work. We refer to this capacity as critical thinking: the objective analysis and evaluation of data in order to form a judgment, even if it causes us to have to build a new frame.

Others hold onto their frames for dear life. They go out of their way to ignore data that may cause them to have to discard the frames they hold. This is what I would define as willful ignorance.
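The asymmetry between those two postures is easy to see in a minimal sketch. The names and data here are illustrative stand-ins, not code from Moore and Hoffman’s paper:

```python
# The sensemaking loop: elaborate the frame when data fits, rebuild when it doesn't.
def sensemake(frame: list, data: str, fits) -> list:
    if fits(frame, data):
        return frame + [data]  # preserve the frame and elaborate on it
    return [data]              # reframe: the costly rebuild from scratch

# Willful ignorance never tests the data at all, so the frame always survives.
def willfully_ignore(frame: list, data: str) -> list:
    return frame

frame = ["prior belief"]
print(sensemake(frame, "disconfirming evidence", lambda f, d: False))
# -> ['disconfirming evidence']  (a new frame)
print(willfully_ignore(frame, "disconfirming evidence"))
# -> ['prior belief']            (the old frame, untouched)
```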

It’s misleading to think of this as just being ignorant. That would simply indicate a lack of available data. It’s also misleading to attribute this to a lack of intelligence. That would be an inability to process the data. With willful ignorance, we’re not talking about either of those things. We are talking about a conscious and deliberate decision to ignore available data. And I don’t believe you can fix that.

We fall into the trap of thinking we can educate, shame or argue people out of being willfully ignorant. We can’t. This post is not intended for the willfully ignorant. They have already ignored it. This is just the way their brains work. It’s part of who they are. Wishing they weren’t this way is about as pointless as wishing they were a world-class pole vaulter, that they were seven feet tall or that their brown eyes were blue.

We have to accept that this situation is not going to change. And that’s what we have to start thinking about. Given that we have willful ignorance in the world, what can we do to minimize its impact?