When the News Hits Home

My, how things have changed.

My intention was to write a follow-up to last week’s post about Canada’s Bill C-18 and Meta’s banning of news on Facebook. I suppose this is a follow-up of sorts. But thanks to Mother Nature – that ofttimes bully – that story was pushed right out of the queue to be replaced with something far more tragic and immediate.

To me, anyway.

I live in Kelowna. Chances are you’ve heard about my home in the last few days. If you haven’t, I can tell you that when I look out my window, all I can see is thick smoke. Which may be a good thing. Last Friday, when I could see, I spent the entire evening watching West Kelowna, across Okanagan Lake from my home, burn in the path of the oncoming McDougall Creek Wildfire. As the flames would suddenly leap towards the sky, you knew that was someone’s home being ignited.

We don’t know how many homes have been lost. The fire has been too active for authorities to have the time to count. We have firefighters and first responders pouring in from around our province to help. Our Air Quality Index is 11 on a scale of 10, as bad as it can get. Thousands are out of their homes. Thousands more have their things packed by the door, ready to leave at a moment’s notice. We’re one of those.

But that’s enough about the fire. This post is about our weird relationship with the news.

When something like this happens, you have a very real, very visceral need to know what’s going on. For those of us who live here in British Columbia, the news has hit home in a way we could never have imagined. A few posts ago, I said it might be healthier for me to ignore the news, because it’s always alarming and very seldom relevant to me. Well, those words are now coming back to haunt me.

This disaster has thrown our reliance on Facebook for news into stark relief. This last Friday, Canada’s Transportation Minister, Pablo Rodriguez, asked Meta to reverse its current ban on news: “We’ve seen that, throughout this emergency, Canadians have not had access to the crucial information they need. So, I ask Meta to reverse its decision, allow Canadians to have access to news on their platforms.”

But there’s another dimension to this that’s a bit more subtle yet even more frightening. It goes to the heart of how we handle crisis. I think you must “zoom in,” performing some type of terrible triage in your mind to be able to imagine the unimaginable. As the winds shift the fire away from your home, there’s relief. But other homes now lie in the path of the fire. In your head, you know that, but emotionally you can’t help but feel a lift. It’s not noble, but it’s human.

So let’s “zoom out” – a lot. We’re not the only ones this is happening to. This is a global crisis. Twenty-six thousand people have been evacuated on the Spanish island of Tenerife. A friend of mine, an airline pilot, spent last week volunteering to fly people out of Maui who had lost their homes in the tragic Lahaina fire.

Take a look at NASA’s FIRMS (Fire Information for Resource Management System) website, which gives a global map of all hotspots from wildfires currently burning. I’ve set this link to wildfire activity in the last 7 days.

Scary as hell, right?

But can we actually process that, in a way that lets us move forward and start coping with this massive issue? Is it enough to change our behaviors in the way we must to finally start addressing climate change?

In a recent article on BBC.com, Richard Fisher talks about “Construal level theory” – which says that the greater the psychological distance there is between the news and your life, the less likely it is to make you change your behavior. For me, the psychological distance between myself and climate change is roughly 1 kilometer (just over half a mile) as the crow flies. That’s how far it is from my house to the nearest evacuation alert area.

It doesn’t get much closer than that. But will we change? Will anything change?

I’m not so sure. We’ve been through this before. Exactly 20 years ago, the Okanagan Mountain wildfire raged through Kelowna, displacing over 30,000 people and destroying 239 homes. It was a summer much like this, at the time the driest summer on record. This year, we have smashed that record, as we have many times since that fire. Once we picked up, rebuilt our homes and got back to life, nothing really changed.

And now, here we are again. Let’s hope that this time is different.

Search and ChatGPT – You Still Can’t Get There From Here

I’m wrapping up my ChatGPTrilogy with a shout-out to an old friend who will be familiar to many MediaPosters – Aaron Goldman. Thirteen years ago, Aaron wrote a book called Everything I Know About Marketing I Learned from Google. Just a few weeks ago, Aaron shared a post entitled “In a World of AI, is Everything I Know about Marketing (still) Learned from Google”. In it, he looked at the last chapter of the book, which he called Future-Proofing. Part of that chapter was based on a conversation Aaron and I had back in 2010 about what search might look like in the future.

Did we get it right? Well, remarkably, we got a lot more right than we got wrong, especially with the advent of Natural Language tools such as ChatGPT and virtual assistants like Siri.

We talked a lot about something I called “app-sistants”. I explained, “the idea of search as a destination is an idea whose days are numbered. The important thing won’t be search. It will be the platform and the apps that run on it. The next big thing will be the ability to seamlessly find just the right app for your intent and utilize it immediately.” In this context, “the information itself will become less and less important and the app that allows utilization of the information will become more and more important.”

To be honest, this evolution in search has taken a lot longer than I thought it would back then: “Intent will be more fully supported from end to end. Right now, we have to keep our master ‘intent’ plan in place as we handle the individual tasks on the way to that intent.”

As it currently stands, searching for complex answers requires a lot of heavy lifting. In that discussion, I used the example of planning a trip: “Imagine if there were an app that could keep my master intent in mind for the entire process. It would know what my end goal was, would be tailored to understand my personal preferences and would use search to go out and gather the required information. When we look at alignment of intent, [a shift from search to apps is] a really intriguing concept for marketers to consider.”

So, the big question is, do we have such a tool? Is it ChatGPT? I decided to give it a try and see. After feeding ChatGPT a couple of carefully crafted prompts about a trip I’d like to take to Eastern Europe someday, I decided the answer is no. We’re not quite there yet. But we’re closer.

After a couple of iterations, ChatGPT did a credible job of assembling a potential itinerary for a trip to Croatia and Slovenia. It even made me aware of some options I hadn’t run across in my previous research. But it left me hanging well short of the “app-sistant” I was dreaming of in 2010. Essentially, I got a suggestion, but all the detail work to turn it into an actual trip still required me to do hundreds of searches in various places.

The problem with ChatGPT is that it gets stuck between the millions of functionality siloes – or “walled gardens” – that make up the Internet. Those “walled gardens” exist because they represent opportunities for monetization. In order for an app-sistant to be able to multitask and make our lives easier, we need a virtual “commonage” that gets rid of some of these walls. And that’s probably the biggest reason we haven’t seen a truly useful iteration of the functionality I predicted more than a decade ago.

This conflict between capitalism and the concept of a commonage goes back at least to the Magna Carta. As England’s economy transitioned from feudalism to capitalism, enclosure saw the building of fences and the wiping out of lands held as a commonage. The actual landscape became a collection of walled gardens that enforced the property rights of each parcel and protected the future production value of those parcels.

This history, which played out over hundreds of years, was repeated and compressed into a few decades online. We went from the naïve idealism of a “free for all” internet in the early days to the balkanized patchwork of monetization siloes that currently make up the Web.

Right now, search engines are the closest thing we have to a commonage on the virtual landscape. Search engines like Google can pull data from within many gardens, but if we actually try to use the data, we won’t get far before we run into a wall.

To go back to the idea of trip planning, I might be able to see what it costs to fly to Rome or what the cost of accommodations in Venice is on a search engine, but I can’t book a flight or reserve a room. To do that, I have to visit an online booking site. If I’m on a search engine, I can manually navigate this transition fairly easily. But it would stop something like ChatGPT in its tracks.

When I talked to Aaron 13 years ago, I envisioned search becoming a platform that lived underneath apps which could provide more functionality to the user. But I was also skeptical about Google’s willingness to do this, as I stated in a later post here on MediaPost. In that post, I thought that this might be an easier transition for Microsoft.

Whether it was prescience or just dumb luck, it is indeed Microsoft taking the first steps towards integrating ChatGPT with search, through Bing. Expedia (which also has Microsoft DNA in its genome) has also taken a shot at it, wrapping ChatGPT in a natural language chat interface.

This flips my original forecast on its head. Rather than the data becoming common ground, it’s the chat interface that’s popping up everywhere. Rather than tearing down the walls that divide the online landscape, ChatGPT is being tacked up as window decoration on those walls.

I did try planning that same trip on both Bing and Expedia. Bing – alas – also left me well short of my imagined destination. Expedia – being a monetization site to begin with – got me a little closer, but it still didn’t seem that I could get to where I wanted to go.

I’m sorry to say search didn’t come nearly as far as I hoped it would 13 years ago. Even with ChatGPT thumbtacked onto the interface, we’re just not there yet.

(Feature Image: OpenAI Art generated from the prompt: “A Van Gogh painting of a chatbot on a visit to Croatia”)

It Took a Decade, but Google Glass is Finally Broken

Did you hear that Google finally pulled the plug on Google Glass?

Probably not. The announcement definitely flew under the radar. It came with much less fanfare than the original rollout in 2013. The technology, which had been quietly on life support as an enterprise tool aimed at select industries, was finally laid to rest with this simple statement on its support page:

Thank you for over a decade of innovation and partnership. As of March 15, 2023, we will no longer sell Glass Enterprise Edition. We will continue supporting Glass Enterprise Edition until September 15, 2023.

Talk about your ignoble demises. They’re offering a mere six months of support for those stubbornly hanging on to their Glass. Glass has been thrown into the ever-growing Google Graveyard, along with Google Health, Google+, Google Buzz, Google Wave, Knol – well, you get the idea.

It’s been 10 years, almost to the day, since Google invited 8,000 people to become “Glass Explorers” (others had a different name – “Glassholes”) and plunge into the world of augmented reality.

I was not a believer – for a few reasons I talked about way back then. That led me to say, “Google Glass isn’t an adoptable product as it sits.” It took 10 years, but I can finally say, “I told you so.”

I did say that wearable technology, in other forms, would be a game changer. I just didn’t think that Google Glass was the candidate to do that. To be honest, I hadn’t really thought much more about it until I saw the muted news that this particular Glass was a lot more than half empty. I think there are some takeaways about the fading dividing line between technology and humans that we should keep in mind.

First of all, I think we’ve learned a little more about how our brains work with “always on” technologies like Google Glass. The short answer is, they don’t – at least not very well. And this is doubly ironic because, according to an interview with Google Glass product director Steve Lee on The Verge back in 2013, that was the whole point:

“We all know that people love to be connected. Families message each other all the time, sports fanatics are checking live scores for their favorite teams. If you’re a frequent traveler you have to stay up to date on flight status or if your gate changes. Technology allows us to connect in that way. A big problem right now are the distractions that technology causes.”

The theory was that it was much less distracting to have information right in the line of sight, rather than having to go to a connected screen that might be in your pocket.

Lee went on. “We wondered, what if we brought technology closer to your senses? Would that allow you to more quickly get information and connect with other people but do so in a way — with a design — that gets out of your way when you’re not interacting with technology? That’s sort of what led us to Glass.” 

The problem here was one of incompatible operating systems – the one that drove Google Glass and the one we have baked into our brains. It turned out that maybe the technology was a little too close to our senses. A 2016 study (Lewis and Neider) found that trying to split attention between two different types of tasks – one scanning information on a heads-up display and one trying to focus on the task at hand – left the brain unable to focus effectively on either. The researchers ended with this cautionary conclusion: “Our data strongly suggest that caution should be exercised when deploying HUD-based informational displays in circumstances where the primary user task is visual in nature. Just because we can, does not mean we should.”

For anyone who spends even a little time wondering how the brain works, this should not come as a surprise. There is an exhaustive body of research showing that the brain is not that great at multitasking. Putting a second cognitive task in our line of sight simply means the distraction is that much harder to ignore.

Maybe there’s a lesson here for Google. I think sometimes they get a little starry-eyed about their own technological capabilities and forget to factor in the human element. I remember talking to a roomful of Google engineers more than a decade ago about search behaviors. I asked them if any of them had heard about Pirolli and Card’s pioneering work on Information Foraging theory. Not one hand went up. I was gobsmacked. That should be essential reading for anyone working on a search interface. Yet, on that day, the crickets were chirping loudly in Mountain View.

If the Glass team had done their human homework, they would have found that the brain needs to focus on one task at a time. If you’re looking to augment reality with additional information, that information has to be synthesized into a single cohesive task for the brain. This means that for augmented reality to be successful, the use case has to be carefully studied to make sure the brain isn’t overloaded.

But I suspect there was another sticking point that prevented Google Glass from being widely adopted. It challenged the very nature of our relationship with technology. We like to believe we control technology, rather than the other way around. We have defined the online world as somewhere we “go” to through our connected devices. We are in control of when and where we do this. Pulling a device out and initiating an action keeps this metaphorical divide in place.

But Google Glass blurred this line in a way that made us uncomfortable. Again, a decade ago, I talked about the inevitable tipping point that will come with the merging of our physical and virtual worlds. Back then, I said, “as our technology becomes more intimate, whether it’s Google Glass, wearable devices or implanted chips, being ‘online’ will cease to be about ‘going’ and will become more about ‘being.’  As our interface with the virtual world becomes less deliberate, the paradigm becomes less about navigating a space that’s under our control and more about being an activated node in a vast network.”

I’m just speculating, but maybe Google Glass was just a step too far in this direction – for now, anyway.

(Feature image: Tim.Reckmann, CC BY-SA 3.0 https://creativecommons.org/licenses/by-sa/3.0, via Wikimedia Commons)

The Biases of Artificial Intelligence: Our Devils are in the Data

I believe that – over time – technology does move us forward. I further believe that, even with all the unintended consequences it brings, technology has made the world a better place to live in. I would rather step forward with my children and grandchildren (the first of which has just arrived) into a more advanced world than step backwards in the world of my grandparents, or my great grandparents. We now have a longer and better life, thanks in large part to technology. This, I’m sure, makes me a techno-optimist.

But my optimism is of a pragmatic sort. I’m fully aware that it is not a smooth path forward. There are bumps and potholes aplenty along the way. I accept that along with my optimism.

Technology, for example, does not play all that fairly. Techno-optimists tend to be white and mostly male. They usually come from rich countries, because technology helps rich countries far more than it helps poor ones. Technology plays by the same rules as trickle-down economics: a rising tide that will eventually raise all boats, just not at the same rate.

Take democracy, for instance. In June 2009, journalist Andrew Sullivan declared “The revolution will be Twittered!” after protests erupted in Iran. Techno-optimists and neo-liberals were quick to declare social media and the Internet as the saviour of democracy. But, even then, the optimism was premature – even misplaced.

In his book The Net Delusion: The Dark Side of Internet Freedom, journalist and social commentator Evgeny Morozov details how digital technologies have been just as effectively used by repressive regimes to squash democracy. The book was published in 2011. Just five years later, that same technology would take the U.S. on a path that came perilously close to dismantling democracy. As of right now, we’re still not sure how it will all work out. As Morozov reminds us, technology – in and of itself – is not an answer. It is a tool. Its impact will be determined by those who built the tool and, more importantly, those who use it.

Also, tools are not built out of the ether. They are necessarily products of the environment that spawned them. And this brings us to the systemic problems of artificial intelligence.

Search is something we all use every day. And we probably don’t think of Google (or other search engines) as biased, or even racist. But a recent study published in the journal Proceedings of the National Academy of Sciences shows that the algorithms behind search are built on top of the biases endemic in our society.

“There is increasing concern that algorithms used by modern AI systems produce discriminatory outputs, presumably because they are trained on data in which societal biases are embedded,” says Madalina Vlasceanu, a postdoctoral fellow in New York University’s psychology department and the paper’s lead author.

To assess possible gender bias in search results, the researchers examined whether words that should refer with equal probability to a man or a woman, such as “person,” “student,” or “human,” are more often assumed to be a man. They conducted Google image searches for “person” across 37 countries. The results showed that the proportion of male images yielded from these searches was higher in nations with greater gender inequality, revealing that algorithmic gender bias tracks with societal gender inequality.
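To make the study’s approach a little more concrete, here is a minimal sketch in Python of the kind of analysis it describes. The country names, image counts and inequality scores below are invented placeholders, not the study’s data; the point is simply the shape of the question: does the male share of “person” image results rise along with a country’s gender-inequality score?

```python
# A sketch of the kind of analysis described above (all numbers invented):
# for each country, what share of "person" image results were labelled male,
# and does that share track a national gender-inequality score?

from math import sqrt

# Hypothetical inputs: (country, male-labelled results, total results, inequality index 0-1)
samples = [
    ("Country A", 61, 100, 0.42),
    ("Country B", 54, 100, 0.21),
    ("Country C", 72, 100, 0.58),
    ("Country D", 49, 100, 0.08),
]

male_share = [male / total for _, male, total, _ in samples]
inequality = [gii for _, _, _, gii in samples]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sqrt(sum((x - mean_x) ** 2 for x in xs))
    sd_y = sqrt(sum((y - mean_y) ** 2 for y in ys))
    return cov / (sd_x * sd_y)

print(f"Male-image share vs. inequality, correlation: {pearson(male_share, inequality):+.2f}")
```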

In a 2020 opinion piece in the MIT Technology Review, researcher and AI activist Deborah Raji wrote:

“I’ve often been told, ‘The data does not lie.’ However, that has never been my experience. For me, the data nearly always lies. Google Image search results for ‘healthy skin’ show only light-skinned women, and a query on ‘Black girls’ still returns pornography. The CelebA face data set has labels of ‘big nose’ and ‘big lips’ that are disproportionately assigned to darker-skinned female faces like mine. ImageNet-trained models label me a ‘bad person,’ a ‘drug addict,’ or a ‘failure.’ Data sets for detecting skin cancer are missing samples of darker skin types.”

Deborah Raji, MIT Technology Review

These biases in search highlight the biases in a culture. Search brings back a representation of content that has been published online; a reflection of a society’s perceptions. In these cases, the devil is in the data. The search algorithm may not be inherently biased, but it does reflect the systemic biases of our culture. The more biased the culture, the more it will be reflected in technologies that comb through the data created by that culture. This is regrettable in something like image search results, but when these same biases show up in the facial recognition software used in the justice system, it can be catastrophic.

In an article in Penn Law’s Regulatory Review, the authors reported: “In a 2019 National Institute of Standards and Technology report, researchers studied 189 facial recognition algorithms—‘a majority of the industry.’ They found that most facial recognition algorithms exhibit bias. According to the researchers, facial recognition technologies falsely identified Black and Asian faces 10 to 100 times more often than they did white faces. The technologies also falsely identified women more than they did men—making Black women particularly vulnerable to algorithmic bias. Algorithms using U.S. law enforcement images falsely identified Native Americans more often than people from other demographics.”

Most of these issues lie with how technology is used. But how about those that build the technology? Couldn’t they program the bias out of the system?

There we have a problem. The thing about societal bias is that it is typically recognized by its victims, not by those who propagate it. And the culture of the tech industry is hardly gender-balanced or diverse. According to a report from the McKinsey Institute for Black Economic Mobility, if the current trajectory holds, experts in tech believe it would take 95 years for Black workers to reach an equitable level of private-sector paid employment.

Facebook, for example, barely budged in its hiring of Black tech workers, going from 3% in 2014 to 3.8% in 2020, though it improved by 8% over those same six years when hiring women. Only 4.3% of the company’s workforce is Hispanic. This essential whiteness of tech extends to the field of AI as well.

Yes, I’m a techno-optimist, but I realize that optimism must be placed in the people who build and use the technology. And because of that, we must try harder. We must do better. Technology alone isn’t the answer for a better, fairer world.  We are.

Making Sense of Willful Ignorance

Willful ignorance is nothing new. Depending on your beliefs, you could say it was willful ignorance that got Adam and Eve kicked out of the Garden of Eden. But the visibility of it is higher than it’s ever been before. In the past couple of years, we have had a convergence of factors that has pushed willful ignorance to the surface — a perfect storm of fact denial.

Those factors include the rise of social media, the erosion of traditional journalism and a global health crisis that has us all focusing on the same issue at the same time. The net result of all this is that we all have a very personal interest in the degree of ignorance prevalent in our society.

In one very twisted way, this may be a good thing. As I said, the willfully ignorant have always been with us. But we’ve always been able to shrug and move on, muttering “stupid is as stupid does.”

Now, however, the stakes are getting higher. Our world and society are at a point where willful ignorance can inflict some real and substantial damage. We need to take it seriously and we must start thinking about how to limit its impact.

So, for myself, I’m going to spend some time understanding willful ignorance. Feel free to come along for the ride!

It’s important to understand that willful ignorance is not the same as being stupid — or even just being ignorant, despite thousands of social media memes to the contrary.

Ignorance is one thing. It means we don’t know something. And sometimes, that’s not our fault. We don’t know what we don’t know. But willful ignorance is something very different. It is us choosing not to know something.

For example, I know many smart people who have chosen not to get vaccinated. Their reasons may vary. I suspect fear is a common denominator, and there is no shame in that. But rather than seek information to allay their fears, these folks have doubled down on beliefs based on little to no evidence. They have made a choice to ignore the information that is freely available.

And that’s doubly ironic, because the very same technology that enables willful ignorance has made more information available than ever before.

Willful ignorance is defined as “a decision in bad faith to avoid becoming informed about something so as to avoid having to make undesirable decisions that such information might prompt.”

And this is where the problem lies. The explosion of content has meant there is always information available to support any point of view. We also have the breakdown of journalistic principles that occurred in the past 40 years. Combined, we have a dangerous world of information that has been deliberately falsified in order to appeal to a segment of the population that has chosen to be willfully ignorant.

It seems a contradiction: The more information we have, the more that ignorance is a problem. But to understand why, we have to understand how we make sense of the world.

Making Sense of Our World

Sensemaking is a concept that was first introduced by organizational theorist Karl Weick in the 1970s. The concept has been borrowed by those working in the areas of machine learning and artificial intelligence. At the risk of oversimplification, it gives us a model for understanding how we “give meaning to our collective experiences.”

(Diagram: D.T. Moore and R. Hoffman, 2011)

The above diagram (from a 2011 paper by David T. Moore and Robert R. Hoffman) shows the sensemaking process. It starts with a frame — our understanding of what is true about the world. As we get presented with new data, we have to make a choice: Does it fit our frame or doesn’t it?

If it does, we preserve the frame and may elaborate on it, fitting the new data into it. If the data doesn’t support our existing frame, we then have to reframe, building a new frame from scratch.
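For the programmatically inclined, here is a toy sketch of that preserve-elaborate-reframe loop in Python. It’s my own rough rendering, not Moore and Hoffman’s formalism, and the fits() check is a hypothetical stand-in for all the messy human judgment involved.

```python
# A toy version of the sensemaking loop described above: keep and elaborate the
# frame while new data fits it; rebuild the frame when the data contradicts it.
# (Illustrative only -- fits() stands in for a lot of messy human judgment.)

def fits(frame: set[str], datum: str) -> bool:
    # Hypothetical compatibility test: a datum clashes if its negation is already held.
    return f"not {datum}" not in frame

def make_sense(frame: set[str], stream: list[str]) -> set[str]:
    for datum in stream:
        if fits(frame, datum):
            frame = frame | {datum}   # preserve the frame and elaborate on it
        else:
            frame = {datum}           # reframe: start building a new frame
    return frame

beliefs = {"wildfires are rare here", "not smoke on the horizon"}
print(make_sense(beliefs, ["smoke on the horizon", "evacuation alert issued"]))
```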

Our brains love frames. It’s much less work for the brain to keep a frame than to build a new one. That’s why we tend to stick with our beliefs — another word for a frame — until we’re forced to discard them.

But, as with all human traits, our ways of making sense of the world vary across the population. Some of us are more apt to spend time on the right side of the diagram, more open to reframing and always willing to consider evidence that may cause us to do so.

That, by the way, is exactly how science is supposed to work. We refer to this capacity as critical thinking: the objective analysis and evaluation of data in order to form a judgment, even if it causes us to have to build a new frame.

Others hold onto their frames for dear life. They go out of their way to ignore data that may cause them to have to discard the frames they hold. This is what I would define as willful ignorance.

It’s misleading to think of this as just being ignorant. That would simply indicate a lack of available data. It’s also misleading to attribute this to a lack of intelligence. That would be an inability to process the data.

With willful ignorance, we’re not talking about either of those things. We are talking about a conscious and deliberate decision to ignore available data. And I don’t believe you can fix that.

We fall into the trap of thinking we can educate, shame or argue people out of being willfully ignorant. We can’t. This post is not intended for the willfully ignorant. They have already ignored it. This is just the way their brains work. It’s part of who they are. Wishing they weren’t this way is about as pointless as wishing they were a world-class pole vaulter, that they were seven feet tall or that their brown eyes were blue.

We have to accept that this situation is not going to change. And that’s what we have to start thinking about. Given that we have willful ignorance in the world, what can we do to minimize its impact?

Imagine a Pandemic without Technology

As the writer of a weekly post that tends to look at the intersection between human behavior and technology, the past 18 months have been interesting – and by interesting, I mean a twisted ride through gut-wrenching change unlike anything I have ever seen before.

I can’t even narrow it down to 18 months. Before that, there was plenty more that was “unprecedented” – to berrypick a word from my post from a few weeks back. I have now been writing for MediaPost in one place or another for 17 years. My very first post was on August 19, 2004. That was 829 posts ago. If you add the additional posts I’ve done for my own blog – outofmygord.com – I’ve just ticked over 1,100 on my odometer.  That’s a lot of soul searching about technology. And the last several months have still been in a class by themselves.

Now, part of this might be where my own head is at. Believe it or not, I do sometimes try to write something positive. But as soon as my fingers hit the keyboard, things seem to spiral downwards. Every path I take seems to take me somewhere dark. There has been precious little that has sparked optimism in my soul.

Today, for example, prior to writing this, I took three passes at writing something else. Each quickly took a swerve towards impending doom. I’m getting very tired of this. I can only imagine how you feel, reading it.

So I finally decided to try a thought experiment. “What if,” I wondered, “we had gone through the past 17 months without the technology we take for granted? What if there was no Internet, no computers, no mobile devices? What if we had lived through the Pandemic with only the technology we had – say – a hundred years ago, during the global pandemic of the Spanish Flu starting in 1918? Perhaps the best way to determine the sum total contribution of technology is to do it by process of elimination.”

The Cons

Let’s get the negatives out of the way. First, you might say that technology enabled the flood of misinformation and conspiracy theorizing that has been so top-of-mind for us. Well, yes – and no.

Distrust in authority is nothing new. It’s always been there, at one end of a bell curve that spans the attitudes of our society. And nothing brings the outliers of society into global focus faster than a crisis that affects all of us.

There was public pushback against the very first vaccine ever invented: the smallpox vaccine. Now granted, the early method was to rub pus from a cowpox blister into a cut in your skin and hope for the best. But it worked. Smallpox is now a thing of the past.

And, if we are talking about pushback against public health measures, that’s nothing new either. Exactly the same thing happened during the 1918-1919 pandemic. Here’s one eerily familiar excerpt from a journal article looking at the issue: “Public-gathering bans also exposed tensions about what constituted essential vs. unessential activities. Those forced to close their facilities complained about those allowed to stay open. For example, in New Orleans, municipal public health authorities closed churches but not stores, prompting a protest from one of the city’s Roman Catholic priests.”

What is different, thanks to technology, is that public resistance is so much more apparent than it’s ever been before. And that resistance is coming with faces and names we know attached. People are posting opinions on social media that they would probably never say to you in a face-to-face setting, especially if they knew you disagreed with them. Our public and private discourse is now held at arm’s length by technology. Gone are all the moderating effects that come with sharing the same physical space.

The Pros

Try as I might, I couldn’t think of another “con” that technology has brought to the past 17 months. The “pro” list, however, is far too long to cover in this post, so I’ll just mention a few that come immediately to mind.

Let’s begin with the counterpoint to the aforementioned “con” – the misinformation factor. While misinformation was definitely spread over the past year and a half, so was reliable, factual information. And for those willing to pay attention to it, it enabled us to find out what we needed to know in order to practice public health measures at a speed previously unimagined. Without technology, we would have been slower to act and – perhaps – fewer of us would have acted at all. At worst, in this case technology probably nets out to zero.

But technology also enabled the world to keep functioning, even if it was in a different form. Working from home would have been impossible without it. Commercial engines kept chugging along. Business meetings switched to online platforms. The Dow Jones Industrial Average, as of this writing, is over 20% higher than it was before the pandemic. In contrast, if you look at stock market performance over the 1918 – 1919 pandemic, the market was almost 32% lower at the end of the third wave than it was at the start of the first. Of course, there are other factors to consider, but I suspect we can thank technology for at least some of that.

It’s easy to point to the negatives that technology brings, but if you consider it as a whole, technology is overwhelmingly a blessing.

What was interesting to me in this thought experiment was how apparent it was that technology keeps the cogs of our society functioning more effectively, but if there is a price to be paid, it typically comes at the cost of our social bonds.

Marketers and Funnel Vision

Two roads diverged in a wood, and I—
I took the one less traveled by,
And that has made all the difference. 

The Road Not Taken by Robert Frost

A couple of years ago, I saw an essay by Elijah Meeks, former Data Visualization Society executive director, about how “We Live in a World of Funnels.” It started out like this:

“You think you’re reading an essay. You’re not. You’re moving through a funnel. This shouldn’t surprise you. You’ve been moving through funnels all day.”

No, we haven’t.

Sorry, Elijah, but the world is not built of funnels. Funnels are arbitrary lenses, invented by marketers, that are applied after the fact. They have nothing to do with how we live our lives. They’re a fabrication — a tool designed to help simplify and visualize real-world data, one that’s been so compelling that we have focused on it to the exclusion of everything that lives outside of it.

We don’t live in a world of funnels. We live in a world that’s a maze of diverse and complex potential paths. At each intersection we reach, we have to make choices. For a marketer, that seems like a daunting thing to analyze. The funnel model simplifies our job by relying on successful conversions as the gold standard and working backwards from there. By relying on a model of a funnel, we can only examine “the road taken” and try to optimize the hell out of it. We never consider the “road not taken.”
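To see just how narrow that backward-looking view is, consider a bare-bones funnel report, sketched below in Python with hypothetical stage counts. All it can measure is the drop-off between the stages we chose to instrument; everyone who wandered off down a path we don’t track simply vanishes from the numbers.

```python
# A bare-bones funnel report (hypothetical counts). It can tell us the drop-off
# along the one path we instrumented -- and nothing about where everyone else went.

funnel = [
    ("Visited site",   10_000),
    ("Viewed product",  3_200),
    ("Added to cart",     640),
    ("Purchased",         190),
]

for (stage, count), (_, previous) in zip(funnel[1:], funnel):
    print(f"{stage:<15} {count:>6}  ({count / previous:5.1%} of the previous stage)")

lost = funnel[0][1] - funnel[-1][1]
print(f"\n{lost:,} people took a 'road not taken' that this model says nothing about.")
```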

Indeed, Robert Frost’s poem, from which I borrowed a few lines to start this post, is the ultimate victim of this kind of misreading. It is, by most who have read it, treated as the ultimate funnel analysis, a look back at what came from choosing the “road less traveled.” But as reviewer David Orr pointed out in this post, it’s at least as much about what might have happened outside of the “funnel” we all try to apply to the poem:

“Because the poem isn’t ‘The Road Less Traveled.’ It’s ‘The Road Not Taken.’ And the road not taken, of course, is the road one didn’t take—which means that the title passes over the ‘less traveled’ road the speaker claims to have followed in order to foreground the road he never tried. The title isn’t about what he did; it’s about what he didn’t do. Or is it?”

The funnel model is inherently constraining in its perspective. You are forced to look backward through the tiny hole at the bottom and speculate on what prevented others from getting to that point.

Why do we do this? Because, initially anyway, it seems easier than other choices. It’s like the old joke about finding the inebriated man outside a bar looking for his car keys under the streetlight. When asked where exactly he lost them, he points behind him to a dark alley.

“Why are you looking for them here then?”

“The light’s better here.”

It certainly seems the light is better in a funnel. We can track activity within the funnel. But how do you track what happens outside of it?  It may seem like a hopeless task, but it doesn’t have to be. There are universals in human behavior that can be surprisingly predictive.

Take B2B buying, for example. When we did the research for the Buyersphere Project, our “a-ha” moment was realizing what a massive role risk had in the decision process.

Prior to this research, we — like every other marketer — relied on the funnel model. Our CRM software had funnel analysis built into it. So did our website traffic tracking tool. Funnels were indeed pervasive — but not in the real world, just in the world of marketing.

But we made a decision at the earliest stage of our research project. We tossed aside the funnel premise and started at the other end, understanding what happens when a potential buyer hits what Google calls the ZMOT: the Zero Moment of Truth. This is defined as “the moment in the buying process when the consumer researches a product prior to purchase.” When we started asking people about the Moment — or the moments before the ZMOT — we found that in B2B, risk-avoidance trumps all else. And it gave us an entirely different view of the buying journey we would never have seen from inside the funnel.

We also realized we were dealing with multiple definitions of risk, depending on whose risk it was. In the implementation of a new technology solution, the risk definition of the person who would be using the solution is completely different than that of the procurement officer who will be overseeing the purchase process.

All this led to a completely different interpretation of buying motivation — one driven by emotions. If you can understand those emotional factors, you can start to understand the choices made at each intersection. It lets us see things beyond the bounds of the “funnel.”

Marketing funnels are a model — not the real world. And as statistician George Box said, “all models are wrong, but some are useful.” I do believe the funnel can be useful, but we just have to understand that there’s so much you can’t see from inside the funnel.

The Crazy World of Our Media Obsessions

Are you watching the news less? Me too. Now that the grownups are back in charge, I’m spending much less time checking my news feed.

Whatever you might say about the last four years, it certainly was good for the news business. It was one long endless loop of driving past a horrific traffic accident. Try as we might, we just couldn’t avoid looking.

But according to Internet analysis tool Alexa.com, that may be over. I ran some traffic rank reports for major news portals and they all look the same: a ramp-up over the past 90 days to the beginning of February, and then a precipitous drop off a cliff.

While all the top portals have a similar pattern, it’s most obvious on Foxnews.com.

It was as if someone said, “Show’s over folks. There’s nothing to see here. Move along.” And after we all exhaled, we did!

Not surprisingly, we watch the news more when something terrible is happening. It’s an evolved hardwired response called negativity bias.

Good news is nice. But bad news can kill you. So it’s not surprising that bad news tends to catch our attention.

But this was more than that. We were fixated by Trump. If it were just our bias toward bad news, we would still eventually get tired of it.

That’s exactly what happened with the news on COVID-19. We worked through the initial uncertainty and fear, where we were looking for more information, and at some point moved on to the subsequent psychological stages of boredom and anger. As we did that, we threw up our hands and said, “Enough already!”

But when it comes to Donald Trump, there was something else happening.

It’s been said that Trump might have been the best instinctive communicator to ever take up residence in the White House. We might not agree with what he said, but we certainly were listening.

And while we — and by we, I mean me — think we would love to put him behind us, I believe it behooves us to take a peek under the hood of this particular obsession. Because if we fell for it once, we could do it again.

How the F*$k did this guy dominate our every waking, news-consuming moment for the past four years?

We may find a clue in Bob Woodward’s book on Trump, Rage. He explains that he was looking for a “reflector” — a person who knew Trump intimately and could provide some relatively objective insight into his character.

Woodward found a rather unlikely candidate for his reflector: Trump’s son-in-law, Jared Kushner.

I know, I know — “Kushner?” Just bear with me.

In Woodward’s book, Kushner says there were four things you needed to read and “absorb” to understand how Trump’s mind works.

The first was an op-ed piece in The Wall Street Journal by Peggy Noonan called “Over Trump, We’re as Divided as Ever.” It is not complimentary to Trump. But it does begin to provide a possible answer to our ongoing fixation. Noonan explains: “He’s crazy…and it’s kind of working.”

The second was the Cheshire Cat in Alice in Wonderland. Kushner paraphrased: “If you don’t know where you’re going, any path will get you there.” In other words, in Trump’s world, it’s not direction that matters, it’s velocity.

The third was Chris Whipple’s book, The Gatekeepers: How the White House Chiefs of Staff Define Every Presidency. The insight here is that no matter how clueless Trump was about how to do his job, he still felt he knew more than his chiefs of staff.

Finally, the fourth was Win Bigly: Persuasion in a World Where Facts Don’t Matter, by Scott Adams. That’s right — Scott Adams, the same guy who created the “Dilbert” comic strip. Adams calls Trump’s approach “Intentional Wrongness Persuasion.”

Remember, this is coming from Kushner, a guy who says he worships Trump. This is not apologetic. It’s explanatory — a manual on how to communicate in today’s world. Kushner is embracing Trump’s instinctive, scorched-earth approach to keeping our attention focused on him.

It’s — as Peggy Noonan realized — leaning into the “crazy.”  

Trump represented the ultimate political tribal badge. All you needed to do was read one story on Trump, and you knew exactly where you belonged. You knew it in your core, in your bones, without any shred of ambiguity or doubt. There were few things I was as sure of in this world as where I stood on Donald J. Trump.

And maybe that was somehow satisfying to me.

There was something about standing one side or the other of the divide created by Trump that was tribal in nature.

It was probably the clearest ideological signal about what was good and what was bad that we’ve seen for some time, perhaps since World War II or the ’60s — two events that happened before most of our lifetimes.

Trump’s genius was that he somehow made both halves of the world believe they were the good guys.

In 2018, Peggy Noonan said that “Crazy won’t go the distance.” I’d like to believe that’s so, but I’m not so sure. There are certainly others that are borrowing a page from Trump’s playbook.  Right-wing Republicans Marjorie Taylor Greene and Lauren Boebert are both doing “crazy” extraordinarily well. The fact that almost none of you had to Google them to know who they are proves this.

Whether we’re loving to love, or loving to hate, we are all fixated by crazy.

The problem here is that our media ecosystem has changed. “Crazy” used to be filtered out. But somewhere along the line, news outlets discovered that “crazy” is great for their bottom lines.

As former CBS Chairman and CEO Leslie Moonves said when Trump became the Republican presidential front-runner back in 2016, “It may not be good for America, but it’s damned good for CBS.”

Crazy draws eyeballs like, well, like crazy. It certainly generates more user views than “normal” or “competent.”

In our current media environment — densely intertwined with the wild world of social media — we have no crazy filters. All we have now are crazy amplifiers.

And the platforms that allow this all try to crowd on the same shaky piece of moral high ground.

According to them, it’s not their job to filter out crazy. It’s anti-free speech. It’s un-American. We should be smart enough to recognize crazy when we see it.

Hmmm. Well, we know that’s not working.

The Ebbs and Flows of Consumerism in a Post-Pandemic World

As MediaPost’s Joe Mandese reported last Friday, advertising was, quite literally, almost decimated worldwide in 2020. If you look at the forecasts of the top agency holding companies, ad spends were trimmed by an average of 6.1%. It’s not quite one dollar in 10, but it’s close.

These same companies are forecasting a relative bounceback in 2021, starting slow and accelerating quarter by quarter through the year — but that still leaves the 2021 spend forecast back at 2018 levels.

And as we know, everything about 2021 is still very much in flux. If the year 2021 was a pack of cards, almost every one of them would be wild.

This — according to physician, epidemiologist and sociologist Nicholas Christakis — is not surprising.

Christakis is one of my favorite observers of network effects in society. His background in epidemiological science gives him a unique lens to look at how things spread through the networks of our world, real and virtual. It also makes him the perfect person to comment on what we might expect as we stagger out of our current crisis.

In his latest book, “Apollo’s Arrow,” he looks back to look forward to what we might expect — because, as he points out, we’ve been here before.

While the scope and impact of this one are unusual, such health crises are nothing new. Dozens of epidemics and a few pandemics have happened in my lifetime alone, according to this Wikipedia chart.

This post goes live on Groundhog Day, perhaps the most appropriate of all days for it to run. Today, however, we already know what the outcome will be. The groundhog will see its shadow and there will be six more months (at least) of pandemic to deal with. And we will spend that time living and reliving the same day in the same way with the same routine.

Christakis expects this phase to last through the rest of this year, until the vaccines are widely distributed, and we start to reach herd immunity.

During this time, we will still have to psychologically “hunker down” like the aforementioned groundhog, something we have been struggling with. “As a society we have been very immature,” said Christakis. “Immature, and typical as well, we could have done better.”

This phase will be marked by a general conservatism that will go in lockstep with fear and anxiety, a reluctance to spend and a trend toward risk aversion and religion.

Add to this the fact that we will still be dealing with widespread denialism and anger, which will lead to a worsening vicious circle of loss and crisis. The ideological cracks in our society have gone from annoying to deadly.

Advertising will have to somehow negotiate these choppy waters of increased rage and reduced consumerism.

Then, predicts Christakis, starting some time in 2022, we will enter an adjustment period where we will test and rethink the fundamental aspects of our lives. We will be learning to live with COVID-19, which will be less lethal but still very much present.

We will likely still wear masks and practice social distancing. Many of us will continue to work from home. Local flare-ups will still necessitate intermittent school and business closures. We will be reluctant to be inside with more than 20 or 30 people at a time. It’s unlikely that most of us will feel comfortable getting on a plane or embarking on a cruise ship. This period, according to Christakis, will last for a couple years.

Again, advertising will have to try to thread this psychological needle between fear and hope. It will be a fractured landscape on which to build a marketing strategy. Any pretense of marketing to the masses, a concept long in decline, will now be truly gone. The market will be rife with confusing signals and mixed motivations. It will be incumbent on advertisers to become very, very good at “reading the room.”

Finally, starting in 2024, we will put the pandemic behind us. Now, says Christakis, four years of pent-up demand will suddenly burst through the dam of our delayed self-gratification. We will likely follow the same path taken a century ago, when we were coming out of a war and another pandemic, in the period we call the “Roaring Twenties.”

Christakis explained: “What typically happens is people get less religious. They will relentlessly seek out social interactions in nightclubs and restaurants and sporting events and political rallies. There’ll be some sexual licentiousness. People will start spending their money after having saved it. There’ll be joie de vivre and a kind of risk-taking, a kind of efflorescence of the arts, I think.”

Of course, this burst of buying will be built on the foundation of what came before. The world will likely be very different from its pre-pandemic version. It will be hard for marketers to project demand in a straight line from what they know, because the experiences they’ve been using as their baseline are no longer valid. Some things may remain the same, but some will be changed forever.

COVID-19 will have pried many of the gaps in our society further apart — most notably those of income inequality and ideological difference. A lingering sense of nationalism and protectionism born from dealing with a global emergency could still be in place.

Advertising has always played an interesting role in our lives. It both motivates and mirrors us.

But the reflection it shows is like a funhouse mirror: It distorts some aspects of our culture and ignores others. It creates demand and hides inconvenient truths. It professes to be noble, while it stokes the embers of our ignobility. It amplifies the duality of our human nature.

Interesting times lie ahead. It remains to be seen how that is reflected in the advertising we create and consume.

Have More People Become More Awful?

Is it just me, or do people seem a little more awful lately? There seems to be a little more ignorance in the world, a little less compassion, a little more bullying and a lot less courtesy.

Maybe it’s just me.

It’s been a while since I’ve checked in with eternal optimist Steven Pinker.  The Harvard psychologist is probably the best-known proponent of the argument that the world is consistently trending towards being a better place.  According to Pinker, we are less bigoted, less homophobic, less misogynist and less violent. At least, that’s what he felt pre-COVID lockdown. As I said, I haven’t checked in with him lately, but I suspect he would say the long-term trends haven’t appreciably changed. Maybe we’re just going through a blip.

Why, then, does the world seem to be going to hell in a hand cart?  Why do people — at least some people — seem so awful?

I think it’s important to remember that our brain likes to play tricks on us. It’s in a never-ending quest to connect cause and effect. Sometimes, to do so, the brain jumps to conclusions. Unfortunately, it is aided in this unfortunate tendency by a couple of accomplices — namely news reporting and social media. Even if the world isn’t getting shittier, it certainly seems to be. 

Let me give you one example. In my local town, an anti-masking rally was recently held at a nearby shopping mall. Local news outlets jumped on it, with pictures and video of non-masked, non-socially distanced protesters carrying signs and chanting about our decline into Communism and how their rights were being violated.

What a bunch of boneheads — right? That was certainly the consensus in my social media circle. How could people care so little about the health and safety of their community? Why are they so awful?

But when you take the time to unpack this a bit, you realize that everyone is probably overplaying their hands. I don’t have exact numbers, but I don’t think there were more than 30 or 40 protesters at the rally. The population of my city is about 150,000. These protesters represented roughly 0.03% of the total population.

Let’s say for every person at the rally, there were 10 that felt the same way but weren’t there. That’s still less than 1%. Even if you multiplied the number of protesters by 100, it would still be just 3% of my community. We’re still talking about a tiny fraction of all the people who live in my city. 
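If you want to check that back-of-envelope math, here it is as a few lines of Python, using the rough figures above (about 40 people at the rally in a city of roughly 150,000):

```python
# Back-of-envelope math from the figures above: even with generous multipliers,
# the protesters remain a small slice of the city.

population = 150_000
at_rally = 40

for multiplier in (1, 10, 100):
    share = at_rally * multiplier / population
    print(f"{multiplier:>3}x the rally crowd -> {share:.2%} of the city")
```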

But both the news media and my social media feed have ensured that these people are highly visible. And because they are, our brain likes to use that small and very visible sample and extrapolate it to the world in general. It’s called availability bias, a cognitive shortcut where the brain uses whatever’s easy to grab to create our understanding of the world.

But availability bias is nothing new. Our brains have always done this. So, what’s different about now?

Here, we have to understand that the current reality may be leading us into another “mind-trap.” A 2018 study from Harvard introduced something called “prevalence-induced concept change,” which gives us a better understanding of how the brain focuses on signals in a field of noise. 

Basically, when signals of bad things become less common, the brain works harder to find them. We expand our definition of what is “bad” to include more examples so we can feel more successful in finding them.

I’m probably stretching beyond the limits of the original study here, but could this same thing be happening now? Are we all super-attuned to any hint of what we see as antisocial behavior so we can jump on it? 

If this is the case, again social media is largely to blame. It’s another example of our current toxic mix of dog whistles, cancel culture, virtue signaling and pseudo-reality that is being driven by social media.

That’s two possible things that are happening. But if we add one more, it becomes a perfect storm of perceived awfulness. 

In a normal world, we all have different definitions of the ethical signals we’re paying attention to. What you are focused on right now in your balancing of what is right and wrong is probably different from what I’m currently focused on. I may be thinking about gun control while you’re thinking about reducing your carbon footprint.

But now, we’re all thinking about the same thing: surviving a pandemic. And this isn’t just some theoretical mind exercise. This is something that surrounds us, affecting us every single day. When it comes to this topic, our nerves have been rubbed raw and our patience has run out. 

Worst of all, we feel helpless. There seems to be nothing we can do to edge the world toward being a less awful place. Behaviors that in another reality and on another topic would have never crossed our radar now have us enraged. And, when we’re enraged, we do the one thing we can do: We share our rage on social media. Unfortunately, by doing so, we’re not part of the solution. We are just pouring fuel on the fire.

Yes, some people probably are awful. But are they more awful than they were this time last year? I don’t think so. I also can’t believe that the essential moral balance of our society has collectively nosedived in the last several months. 

What I do believe is that we are living in a time where we’re facing new challenges in how we perceive the world. Now, more than ever before, we’re on the lookout for what we believe to be awful. And if we’re looking for it, we’re sure to find it.