The Long-Term Fallout from MAGA: One Canadian’s Perspective

The other day, an American friend asked how Canada was currently feeling about Trump and the whole MAGA thing. You may remember, some months back, a number of broadsides towards Canada from the president that seemingly came from nowhere – Trump threatening/cajoling us to become the 51st state, on-again, off-again tariffs, continued assertions that the US does not need Canada for anything, and completely undisguised threats towards us from Pete Hoekstra, the American Ambassador to Canada.

We took it personally. “Elbows up” became the Canadian rallying cry – a reference to protecting yourself in our beloved national sport – fighting along the boards balanced on frozen water while wearing sharp blades on your feet. Liquor stores had shelf after empty shelf that had once been laden with California reds and Kentucky bourbon. Canadian trips to Disneyland and Las Vegas plummeted. Grocery stores started labeling products that (supposedly – which is another story) came from Canada. Canadian consumers and businesses scrambled to find Canadian substitutes for traditional American suppliers.

That was then. What about now?

Trump and the MAGA train have moved on to an endless list of other scandals and dumpster fires. I haven’t heard a whisper of the 51st state for a long time. While our trade war continues, fueled by shots across the bow from both sides, I think it’s fair to say that we are now just lumped in with every other country reeling from the daily bat-shit crazy barrage coming from Washington. Canadians are used to being ignored, for good or bad, so we’re back to situation normal – all F*$%ed up.

But have Canadians moved on? Have we dropped said elbows? The honest answer is – it’s complicated.

Predictably, the patriotic fervor we had early this year has cooled off. California reds are back on the shelves. More Canadians are planning to visit Hawaii and Florida this winter. “Grown in the U.S.A.” stickers are back where they belong, in the produce bins at our grocery stores. When it comes to our American habit, it’s like the line from Brokeback Mountain: “We wish we knew how to quit you.”

Like all relationships, the one between the US and Canada is complex. It’s unrealistic to expect a heavily intertwined relationship like ours to disappear overnight. There are probably no two countries in the world more involved with each other’s business than we are. And that cuts both ways, despite what Mr. Trump says. We have been married to each other for a very long time. Even if we want to go through with it, a divorce is going to take some time.

The numbers from the first six months of our “Buy Canadian” campaign are in, and they are less than inspiring. According to StatsCan, 70% of Canadian businesses saw no increase in sales at all. Even among those that did, the impact was minimal, and any gain was usually offset by other sales challenges.

But if you dig a little deeper, there are signs that there might be more long-term damage done here than first meets the eye. In Canadian grocery stores over the past six months, sales of “Made in Canada” products are up 10% while U.S.-made goods are down 9%. Those aren’t huge swings, but they have been sustained over six months, and in the words of one Canadian analyst speaking on CBC Radio, when something lasts for six months, “you’re moving from fad territory to trend territory.”

The dilemma facing Canadians is something called the “attitude-behavior gap” – the difference between what we want to do and what we are actually doing. Canadians – 85% of us, anyway – want to buy Canadian rather than American, but it’s really hard to do that. Canadian goods are harder to find and typically cost more. It’s the reality of having a trading partner that outnumbers you in both market size and output by a factor of 10 to 1. If we want to have a Caesar salad in December, we’re going to have to buy lettuce grown in the U.S.

But we are talking relationships here, so let’s look at that 85% “Buy Canadian” intention number again. It means that – six months after we were insulted – we still feel that a fundamental trust was irrevocably broken. We’re being pragmatic about it, but our intention is clear: we’re looking for alternatives to our past default behavior – buying American. When those alternatives make economic and behavioral sense to us, we’ll find other partners. That is what is happening in Canada right now.

Should Americans care? I believe so. Because I’m sure we’re not the only ones. The world is currently reeling from the sharp American pivot away from being a globally trusted partner. The short-term reality is that we will put up with it for now and pander to the Presidential powers that be, because we have to.

But we’re looking for options. Our dance card is suddenly wide open.

The Raging Ripple Effect of AirBnB

Ripple Effect: the continuing and spreading results of an event or action.

I’m pretty sure Brian Chesky and Joe Gebbia had no idea what they were unleashing when they decided to rent out an air mattress in the front room of their San Francisco apartment in the fall of 2007. The idea made all kinds of sense: a huge conference was in town, there was not a hotel room to be had, and they were perpetually short on their rent. It seemed like the perfect win-win – and, at first, it was.

But then came the Internet. AirBnB was born and would unleash unintended consequences that would change the face of tourism, upend real estate markets and tear apart neighborhoods in cities around the world.

For the past two decades we have seen the impact of simple ideas that can scale massively thanks to the networked world we live in. In a physical world, there are real-world factors that limit growth. Distribution, logistics, production, awareness – each of these critical requirements for growth is necessarily limited by geography and physical reality.

But in a wired world, sometimes all you need is to provide an intermediary link between two pools of latent potential, and the effect is the digital equivalent of an explosion. There is no physical friction to moderate the effect. That’s what AirBnB did. Chesky and Gebbia’s simple idea became the connection between frustrated travellers who were tired of exorbitant hotel prices and millions of ordinary people who happened to have a spare bed. There was enormous potential on both sides, and all AirBnB had to do was facilitate the connection.

AirBnB’s rise was meteoric. After Chesky and Gebbia’s initial brainstorm in 2007, they launched a website the next spring, in 2008. One year later there were hosts in 1,700 cities in 100 different countries. Two years after that, AirBnB had hosted their 1 millionth guest and had over 120,000 listings. By 2020, the year Covid threw a pandemic-sized spanner in the works of tourism, AirBnB had 5.6 million listings and was heading towards an IPO.

Surprisingly, though, a global pandemic wasn’t the biggest problem facing AirBnB. There was a global backlash building that had nothing to do with Covid-19. AirBnB’s biggest problem was the unintended ripple effects of Chesky and Gebbia’s simple idea.

Up until the debut of the internet and the virtual rewiring of our world, new business ideas usually grew slowly enough for the world to react to their unintended consequences. As problems emerged, new legislation could be passed, new safeguards could be introduced and new guidelines could be put in place. But when AirBnB grew from a simple idea to a global juggernaut in a decade, things happened too quickly for the physical world to respond. Everything was accelerated: business growth, demand and the impact on both tourism and the communities those tourists were flocking to. 

Before we knew what was happening, tourism had exploded to unsustainable levels, real estate markets had gone haywire and entire communities were being gutted as their character changed from traditional neighborhood to temporary housing for wave after wave of tourists. It’s only recently that many cities threatened with the “AirBnB effect” have responded with legislation that either bans or severely curtails short-term vacation rentals.

The question is, now that it’s been unleashed, can the damage done by AirBnB be undone? Real estate markets that were artificially fueled by sales to prospective short-term rental hosts may eventually find a new equilibrium, but many formerly affordable listings could remain priced beyond the reach of first-time home buyers. Will cities deluged by an onslaught of tourism ever regain the charm that made them favored destinations in the first place? Will neighbourhoods that were transformed by owners cashing in on the AirBnB boom ever regain their former character?

In our networked world, the ripples of unintended consequences spread quickly, but their effects may be with us forever.

Why I Hate Marketing

I have had a love-hate relationship with marketing for a long time now. And – I have to admit – lately the pendulum has swung a lot more to the hate side.

This may sound odd coming from someone who was a marketer for almost all of his professional life. From the time I graduated from college until I retired, I was marketing in one form or another. That span was almost 40 years. And for all that time, I felt the art of marketing lived very much in an ethical grey zone. When someone asked me to define marketing, I usually said something like this: “Marketing is convincing people to buy something they want but probably don’t need.” And sometimes, marketing has to manufacture that “want” out of thin air.

When I switched from traditional marketing to search marketing almost 30 years ago, I felt it aligned a little better with my ethics. At least with search marketing, the market had already held up its hand and said it wanted something. It had already signaled its intent. All I had to do was create the connection between that intent and what my clients offered. It was all very rational – I wasn’t messing with anyone’s emotions.

But as the ways we can communicate with prospects digitally have exploded, including through the cesspool we call social media, I have seen marketing slip further and further into an ethical quagmire. Emotional manipulation, false claims and bait-and-switch games are now the norm rather than the exception in marketing.

Let me give you one example that I’ve run into repeatedly. The way we book a flight has changed dramatically in the last 25 years. It used to be that airline bookings always happened through an agent. But with the creation of online travel agencies, travel search tools and direct booking with the airlines, the information asymmetry that had traditionally protected airline profit margins evaporated. Average fares plummeted, and airline profits suffered as a result.

Here in Canada, the two major airlines eventually responded to this threat by following the lead of European low-cost carriers and introducing an elaborate bait-and-switch scheme. They created “ultra-basic” fares (the actual labels vary) by stripping everything possible in the way of customer comfort from the logistical reality of getting one human body from point A to point B. There is no carry-on bag allowance, no seat selection, no points collection, no flexibility in booking and no hope of getting a refund or flight credit if your plans change. To add insult to injury, you’re also shuttled into the very last boarding group and squeezed into the most undesirable seats on the plane. The airlines have done everything possible to let you know you are hanging on to the very bottom rung of their customer appreciation ladder.

Now, you may say that this is just another case of “caveat emptor” – it’s the buyer’s responsibility to know what they’re purchasing and set their expectations accordingly. These fares do give passengers the ability to book a bare-bones flight at a much lower cost. It’s just the airlines responding to a market need. And I might agree – if it weren’t for how these fares are used by the airline’s marketers.

Flight-tracking tools let you follow fare prices for future trips, sending you an alert when fares change substantially in either direction. This kind of information puts a lot of power in the hands of the customer, but airlines like WestJet and Air Canada use their bare-bones basic fares to game the system.

While it is possible on some tracking tools, like Google Flights, to set your preferences to exclude “basic” fares, most users stick to the default settings, which include these loss-leader offerings. They then get alerts with what seem to be great deals on flights as the airlines introduce a never-ending stream of seat sales. The airlines know that by reducing the fares on a select few seats for a few days – just enough to trigger an alert – they will get a rush of potential flyers who have been using a tracker, waiting for the right time to book.

As soon as you come to the airline site to book, you see that while a few seats at the lowest basic fare are on sale, the prices on the economy seats that most of us book haven’t budged. In fact, it seems to me that they’ve gone up substantially. On one recent search, the next price level for an economy seat was three times as much as the advertised ultra-basic fare. And if you do stick with booking the ultra-basic fare, you are asked multiple times whether you’re sure you don’t want to upgrade. With one recent booking, I was asked no fewer than five times if I wanted to pay more before the purchase was complete.

This entire marketing approach feels uncomfortably close to gaslighting. Airline marketers have used every psychological trick in the book to lure you in and then convince you to spend much more than you originally intended. And this didn’t happen by accident. Those marketers sat down in a meeting (actually, probably several meetings) and deliberately plotted out – point by point – the best way to take advantage of their customers and squeeze more money from them. I know, because I’ve been in those meetings. And a lot of you reading this have been too.

When I started marketing, the goal was to build a long-term, mutually beneficial relationship with your customers. Today, much of what passes for marketing is more like preying on a vulnerable prospect in an emotionally abusive relationship.

And I don’t love that.

The Cost of Not Being Curious

The world is having a pandemic-proportioned wave of Ostrichitis.

Now, maybe you haven’t heard of Ostrichitis. But I’m willing to bet you’re showing at least some of the symptoms:

  • Avoiding newscasts, especially those that feature objective and unbiased reporting
  • Quickly scrolling past any online news items in your feed that look like they may be uncomfortable to read
  • Dismissing out of hand information coming from unfamiliar sources

These are the signs of Ostrichitis – or the Ostrich Effect – and I have all of them. This is actually a psychological effect, more pointedly called willful ignorance, which I wrote about a few years ago. And from where I’m observing the world, we all seem to have it to one extent or another.

I don’t think this avoidance of information comes as a shock to anyone. The world is a crappy place right now. And we all seem to have gained comfort from adopting the folk wisdom that “no news is good news.” Processing bad news is hard work, and we just don’t have the cognitive resources to crunch through endless cycles of catastrophic news. If the bad news affirms our existing beliefs, it makes us even madder than we already were. If it runs counter to our beliefs, it forces us to spin up our sensemaking mechanisms and reframe our view of reality. Either way, there are way more fun things to do.

A recent study from the University of Chicago attempted to pinpoint when children start avoiding bad news. The research team found that while young children don’t tend to put boundaries around their curiosity, as they age they start avoiding information that challenges their beliefs or their own well-being. The threshold seems to be about six years old. Before that, children actively seek information of all kinds (as any parent barraged by never-ending “Whys” can tell you). After that, children start strategizing about the types of information they pay attention to.

Now, like everything about humans, curiosity tends to be an individual thing. Some of us are highly curious and some of us avoid seeking new information religiously. But even if we are a curious sort, we may pick and choose what we’re curious about. We may find “safe zones” where we let our curiosity out to play. If things look too menacing, we may protect ourselves by curbing our curiosity.

The unfortunate part of this is that curiosity, in all its forms, is almost always a good thing for humans (even if it can prove fatal to cats).

The more curious we are, the better tied we are to reality. The lens we use to parse the world is something called a sense-making loop. I’ve often referred to this in the past. It’s a processing loop that compares what we experience with what we believe, referred to as our “frame”. For the curious, this frame is often updated to match what we experience. For the incurious, the frame is held on to stubbornly, often by ignoring new information or bending information to conform to their beliefs. A curious brain is a brain primed to grow and adapt. An incurious brain is one that is stagnant and inflexible. That’s why the father of modern-day psychology, William James, called curiosity “the impulse towards better cognition.”

When we think about the world we want, curiosity is a key factor in defining it. Curiosity keeps us moving forward. The lack of curiosity locks us in place or even pushes us backwards, causing the world to regress to a more savage and brutal place. Writers of dystopian fiction knew this. That’s why authors including H.G. Wells, Aldous Huxley, Ray Bradbury and George Orwell all made a lack of curiosity a key part of their bleak future worlds. Our current lack of curiosity is driving our world in the same dangerous direction.

For all these reasons, it’s essential that we stay curious, even if it’s becoming increasingly uncomfortable.

Being in the Room Where It Happens

I spent the past weekend attending a conference that I had helped to plan. As is now often the case, this was a hybrid conference; you could choose to attend in person or online via Zoom. Although it involved a long plane ride, I chose to attend in person. It could be because – as a planner – I wanted to see how the event played out. Also, it’s been a long time since I attended a conference away from home. Or – maybe – it was just FOMO.

Whatever the reason, I’m glad I was there, in the room.

This was a very small conference planned on a shoestring budget. We didn’t have money for extensive IT support or AV equipment. We were dependent solely on a laptop and whatever sound equipment our host was able to supply. We knew going into the conference that this would make for a less-than-ideal experience for those attending virtually. But – even accounting for that – I found there was a huge gap in the quality of the experience between those who were there in person and those attending online. And, over the duration of the 3-day conference, I observed why that might be so.

This conference was a 50/50 mix of those who already knew each other and those who were meeting for the first time. Even those who were familiar with each other tended to connect more often via a virtual meeting platform than in a physical meeting space. I know that despite the convenience and efficiency of being able to meet online, something is lost in the process. After carefully observing what was happening in the room we were all in, I have a better understanding of what that loss might be – the vague and inexact art of creating a real bond with another person.

In that room, the bonding didn’t happen at the speaking podium and very seldom happened during the sessions we so carefully planned. It seeped in on the sidelines, over warmed-over coffee from conference centre urns, overripe bananas and the detritus of the picked over pastry tray. The bonding came from all of us sharing and digesting a common experience. You could feel a palpable energy in the room. You could pick up the emotion, read the body language and tune in to the full bandwidth of communication that goes far beyond what could be transmitted between an onboard microphone and a webcam.

But it wasn’t just the sharing of the experience that created the bonds. It was the digesting of those experiences after the fact. We humans are herding animals, and that extends to how we come to consensus about things we go through together. We do so through communication with others – not just with words and gestures, but also through the full bandwidth of our evolved mechanisms for coming to a collective understanding. It wasn’t just that a camera and microphone couldn’t transmit that effectively; it was that it happened where there was no camera or mic.

As researchers have discovered, there is a lived reality and a remembered reality and often, they don’t look very much alike. The difference between the effectiveness of an in-person experience and one accessed through an online platform shouldn’t come as a surprise to us. This is due to how our evolved sense-making mechanisms operate. We make sense of reality both internally, through a comparison with our existing cognitive models and externally, through interacting with others around us who have shared that same reality. This communal give-and-take colors what we take with us, in the form of both memories and an updated model of what we know and believe. When it comes to how humans are built, collective sense making is a feature, not a bug.

I came away from that conference with much more than the content that was shared at the speaker dais. I also came away with a handful of new relationships, built on sharing an experience and, through that, laying down the first foundations of trust and familiarity. I would not hesitate to reach out to any of these new friends if I had a question about something or a project I felt they could collaborate on.

I think that’s true largely because I was in the room where it happened.

Saying Goodbye to our Icons

It’s been a tough couple of months for those of us who grew up in the 60s and 70s. Last month, we had to say goodbye to Robert Redford, and then, just over a week ago, we bid farewell to Diane Keaton.

It’s always sobering to lose those cultural touchstones of our youth. It forces us to reckon with our own mortality. Our brains play that maudlin math: “I remember them being young when I was young, so they can’t be that much older than me.” We tend to shrink the age difference between ourselves and those we watched when we were young, so when they’re gone, we naturally wonder how much time we have left.

This makes it hard to lose any of the icons of our youth, but these two – for me – felt different: sadder, more personal. It was like I had lost people I knew.

I know there are many who swooned for Bobby Redford. But I know first-hand that an entire generation of male (and possibly female) adolescents had a crush on Diane Keaton’s Annie Hall. Her breakout role was one of those characters that carved a permanent place in our psyche. “Annie Hall-esque” became a descriptor we could all immediately understand – quirky, cute, with insecurities that were rendered as charming. We all wanted to be her port in a storm.

Diane Keaton and Robert Redford seemed like people we could know, given the chance. If circumstances ever caused our paths to cross, we felt we could have a real conversation with them. We could talk about meaningful things and perhaps connect on a personal level. There was depth below the celebrity and the heart of a real person beating there. We may have just known them through a screen – but they used those platforms to build a connection that felt real and human.

I wondered what it was about these two – in particular – that made the connection real. It was something that went beyond their talent, although their talent was undeniable. One only has to watch Keaton’s masterful acting opposite Al Pacino in The Godfather Part II. After a visit with her estranged children, she is being pushed out the door before ex-husband Michael Corleone comes home, but he walks in while she’s still standing in the doorway. No words pass between the two for almost a minute. Everything is conveyed just by their expressions. It’s a scene that still rips my heart out.

It was also not about celebrity. In fact, Redford and Keaton both eschewed the life of a celebrity. Robert Redford found his life away from Hollywood in the ranch lands of Utah, and Diane Keaton – well – in typical Keaton fashion, she just kind of ignored being a celebrity. In an interview with Vanity Fair in 1985, she said, “I think I like to deny it (being famous). It suits me to deny it. It’s more comfortable for me to deny it, but I suppose that’s another one of my problems. Look, I don’t think it’s such a big deal. I don’t think I’m that big a thing.”

So, if it wasn’t their talent or their celebrity status, what was it about Keaton and Redford that forged such a strong bond with many of us? I think it may have been three things.

First, it was about consistency. They were judicious about what they shared with us but what they did choose to share was rock solid and reliable. Whatever was at the core of who they were – it shone through their performances. There was a foundation to each Redford and Keaton performance that was both essential and relatable. You couldn’t imagine anyone else playing these roles. 

The authenticity of their humanness was another factor. Robert Redford’s acting style was restrained and typically underplayed, but his charismatic good looks sometimes got in the way of the depth and vulnerability he tried to bring to his performances. He famously tried out for the lead role in 1967’s The Graduate (which went to Dustin Hoffman) but was turned down by director Mike Nichols because he couldn’t see Redford as a believable “loser.” “Let’s put it this way,” Nichols reportedly said, “Have you ever struck out with a girl?” “What do you mean?” Redford replied.

Keaton was a little different. She embodied vulnerability in every role she played. She wasn’t perfect, and that was the point. We loved her imperfections. The characters Diane Keaton played were neither aspirational nor cautionary, they were revelatory. We connected with them, because we could see ourselves in them.

Finally, we knew there was depth to both Diane Keaton and Robert Redford. They believed passionately in things and weren’t afraid to speak out on behalf of those beliefs. I would have loved to have a conversation with either of them about serious things, because I feel I would have walked away with a perspective worth discovering.

It’s sadly ironic that two icons who shared so much screen time with us never shared it with each other. They were tentatively scheduled to appear together in a 2012 holiday comedy, but it never made it to the screen.

I will miss having both Robert Redford and Diane Keaton in my world. They made it better.

Lilith Fair: A Quarter Century and A Different World Ago

Lilith Fair: Building a Mystery, a new documentary released on Hulu (CBC Gem in Canada), is much more than a chronicle of a music festival. It’s a very timely statement on both the strength and fragility of community.

Lilith Fair was the festival launched in 1997 by Canadian singer/songwriter Sarah McLachlan. It was conceived as a feminine finger in the eye of a determinedly misogynistic music industry. At the end of the ’90s, despite a boom in talented female singer-songwriters (Tracy Chapman, Jewel, Paula Cole, Sheryl Crow, Natalie Merchant, Shawn Colvin, Lisa Loeb, Suzanne Vega and others too numerous to mention), radio stations wouldn’t play two songs by women back-to-back. Promoters also wouldn’t book two women on the same concert ticket. The feeling, based on nothing other than male intuition, was that it would be too much “femininity” for the audience to handle.

McLachlan, in her charmingly polite Canadian way, said “Fudge you!” and launched her own festival. The first one, in 1997, played almost 40 concerts over 51 days across North America. The line-up was exclusively female – 70 singers in all playing on three stages. Almost every concert sold out. Apparently, there was an audience for female talent. Lilith Fair would be repeated in 1998 and 1999, with both tours being smashing successes.

The world needed Lilith Fair in the late ’90s. It wasn’t only the music industry that was misogynistic and homophobic. It was our society. The women who played Lilith Fair found a community of support unlike anything they had ever experienced in their careers. Performers who had been feeling isolated for years suddenly found support and – more than anything – understanding.

It was women who made the rules and ran the Lilith Fair show. It was okay to perform when you were 8 months pregnant. It was okay to hold your baby onstage as you performed the group encore. It was okay to bring the whole family on tour and let the kids play backstage while you did your set. These were things that were – up until then – totally foreign in the music industry. It was the very definition of community – diverse people having something in common and joining together to deal from a position of strength.

But it didn’t happen overnight. It took a while – and a lot of bumping into each other backstage – for the community to gel. It also needed a catalyst, which turned out to be Amy Ray and Emily Saliers, officially known as the Indigo Girls. It was their outgoing friendliness that initially broke the ice, “because we were so gay and so puppy dog-like.”

This sense of community extended beyond the stage to the thousands who attended: men and women, old and young, straight and gay. It didn’t matter – Lilith Fair was a place where you would be accepted and understood. As documentary producer Dan Levy (of Schitt’s Creek fame) – who was 12 years old when he attended and was yet to come out – said, “Being there was one of the earliest memories I’ve had of safety.”

The unity and inclusiveness of Lilith Fair stood in stark contrast to another festival of the same era – Woodstock ’99. There, toxic masculinity from acts like Limp Bizkit’s Fred Durst and Kid Rock swung the vibe of the event heavily towards anarchy and chaos rather than community.

But while Lilith Fair showed the importance of community, it also showed how fragile community could be. The festival became the butt of jokes on late-night television (including one particularly cringe-worthy crack by Jay Leno about Paula Cole’s body hair) and a target for those who sought to diminish its accomplishments and importance. Finally, at the end of the 1999 tour, McLachlan had had enough. The last concert was played in the rain in Edmonton, Alberta, on August 31st.

McLachlan did try to revive Lilith Fair in 2010, but it was a complete failure. Whatever lightning in a bottle she had captured the first time was gone. The world had passed it by. The documentary doesn’t dwell on this, other than offering a few reasons why it might be so. Perhaps Lilith Fair wasn’t needed anymore. Maybe it had done its job. After all, women had mounted some of the top tours of that time, including those by Taylor Swift, Madonna, Pink and Lady Gaga.

Or maybe it had nothing to do with the industry. Maybe it had everything to do with us, the audience.

The world of 1999 was a very different place than the world of 2010. Community was in the midst of being redefined, from those sharing a common physical location to those sharing a common ideology in online forums. And that type of community didn’t require a coming together. If anything, those communities kept us apart, staring at a screen – alone in our little silos.

According to the American Time Use Survey, time spent socializing in person has been on a steady decline since 2000. This is especially true for those under the age of 25, the prime market for music festivals. When we do venture out to a concert, we are looking for spectacle, not community. This world moves too fast for the slow, sweet magic that made Lilith Fair so special to coalesce.

At the end of the documentary, Sarah McLachlan made it clear that she would never attempt to bring Lilith Fair back to life. It was a phenomenon of its time. And that is sad – sad indeed.

When Did the Future Become So Scary?

The TWA Hotel at JFK airport in New York gives one an acute case of temporal dissonance. It’s a step backwards in time to the “Golden Age of Travel” – the 1960s. But even though you’re transported back 60 years, it seems like you’re looking into the future. The original space – the TWA Flight Center – was designed in 1962 by Eero Saarinen. This was a time when America was in love with the idea of the future. Science and technology were going to be our saving grace. The future was going to be a utopian place filled with flying jet cars, benign robots and gleaming, sexy white curves everywhere. The TWA Flight Center was dedicated to that future.

It was part of our love affair with science and technology during the ’60s. Corporate America was falling over itself to bring the space-age future to life as soon as possible. Disney first envisioned the community of tomorrow that would become Epcot. Global expos had pavilions dedicated to what the future would bring. There were four World’s Fairs over 12 years, from 1958 to 1970, each celebrating a bright, shiny white future. There wouldn’t be another for 22 years.

This fascination with the future was mirrored in our entertainment. Star Trek (pilot in 1964, series start in 1966) invited all of us to boldly go where no man had gone before – into a future set roughly three centuries from then. For those of us of a younger age, The Jetsons (original series from 1962 to 63) indoctrinated an entire generation into this religion of future worship. Yes, tomorrow would be wonderful – just you wait and see!

That was then – this is now. And now is a helluva lot different.

Almost no one – especially in the entertainment industry – envisions the future as anything other than an apocalyptic hellhole. We’ve done an about-face and are grasping desperately for the past. The future went from being utopian to dystopian, seemingly in the blink of an eye. What happened?

It’s hard to nail down exactly when we went from eagerly awaiting the future to dreading it, but it appears to be sometime during the last two decades of the 20th Century. By the time the clock ticked over to the next millennium, our love affair was over. As Chuck Palahniuk, author of the 1999 novel Invisible Monsters, quipped, “When did the future go from being a promise to a threat?”

Our dread about the future might just be a fear of change. As the future we imagined in the 1960s started playing out in real time, perhaps we realized our vision was a little too simplistic. The future came with unintended consequences, including massive societal shifts. It’s like we collectively told ourselves, “Once burned, twice shy.” Maybe it was the uncertainty of the future that scared the bejeezus out of us.

But it could also be how we got our information about the impact of science and technology on our lives. I don’t think it’s a coincidence that our fear of the future coincided with the decline of journalism. Sensationalism and endless punditry replaced real reporting at about the same time we started this about-face. When negative things happened, they were amplified. Fear was the natural result. We felt out of control, and we kept telling ourselves that things never used to be this way.

The sum total of all this was the spread of a recognized psychological affliction called Anticipatory Anxiety – the conviction that the future is going to bring bad things down upon us. This went from being a localized phenomenon (“my job interview tomorrow is not going to go well”) to a widespread angst (“the world is going to hell in a handbasket”). Call it Existential Anticipatory Anxiety.

Futurists are – by nature – optimists. They believe things will be better tomorrow than they are today. In the Sixties, we all leaned into the future. The opposite of this is something called Rosy Retrospection, and it often comes bundled with Anticipatory Anxiety. It is a known cognitive bias that comes with a selective memory of the past, tossing out the bad and keeping only the good parts of yesterday. It makes us yearn to return to the past, when everything was better.

That’s where we are today. It explains the worldwide swing to the right. MAGA is really a four-letter encapsulation of Rosy Retrospection – Make America Great Again! Whether you believe the slogan or not, it’s a message that is very much in sync with our current feelings about the future and the past.

As writer and right-leaning political commentator William F. Buckley said, “A conservative is someone who stands athwart history, yelling Stop!”

It’s Tough to Consume Conscientiously

It’s getting harder to be both a good person and a wise consumer.

My parents never had this problem when I was a kid. My dad was a Ford man. Although he hasn’t driven for 10 years, he still is. If you grew up in the country, your choices were simple – you needed a pickup truck. And in the 1960s and 70s, there were only three choices: Ford, GMC or Dodge. For dad, the choice was Ford – always.

Back then, brand relationships were pretty simple. We benefited from the bliss of ignorance. Did the Ford Motor Company do horrible things during that time? Absolutely. As just one example, they made a cost-benefit calculation and decided to keep the Pinto on the road even though they knew it tended to catch fire when hit from the rear. There is a corporate memo stating – in black and white – that it would be cheaper to settle the legal claims of those who died than to fix the problem. The company was charged with reckless homicide. It doesn’t get less ethical than that.

But that didn’t matter to Dad. He either didn’t know or didn’t care. The Pinto Problem, along with the rest of the shady stuff done by the Ford Motor Company, including bribes, kickbacks and improper use of corporate funds by Henry Ford II, was not part of Dad’s consumer decision process. He still bought Ford. And he still considered himself a good person. The two things had little to do with each other.

Things are harder now for consumers. We definitely have more choice, and those choices are harder because we know more. Even buying eggs becomes an ethical struggle. Do we save a few bucks, or do we make some chicken’s life a little less horrible?

Let me give you the latest example from my life. Next year, we are planning to take our grandchildren to a Disney theme park. If our family has a beloved brand, it would be Disney. The company has been part of my kids’ lives in one form or another since they were born, and we all want it to be part of their kids’ lives as well.

Without getting into the whole debate, I personally have some moral conflicts with some of Disney’s recent corporate decisions. I’m not alone. A Facebook group for those planning a visit to this particular park has recently seen posts from those agonizing over the same issue. Does taking the family to the park make us complicit in Disney’s actions that we may not agree with? Do we care enough to pull the plug on a long-planned park visit?

This gets to the crux of the issue facing consumers now – how do we balance our beliefs about what is wrong and right with our desire to consume? Which do we care more about? The answer, as it turns out, seems to almost always be to click the buy button as we hold our noses.

One way to make that easier is to tell ourselves that one less visit to a Disney park will make virtually no impact on the corporate bottom line. Depriving ourselves of a long-planned family experience will make no difference. And – individually – this is true. But it’s exactly this type of consumer apathy which, when aggregated, allows corporations to get away with being bad moral actors.

Even if we want to be more ethically deliberate in our consumer decisions, it’s hard to know where to draw the line. Where are we getting our information about corporate behavior from? Can it be trusted? Is this a case of one regrettable action, or is there a pattern of unethical conduct? These questions are complex, and complexity makes any decision tricky.

To go back to a simpler time, my grandmother had a saying that she applied liberally to any given situation, “What does all this have to do with the price of tea in China?” Maybe she knew what was coming.

There Are No Short Cuts to Being Human

The Velvet Sundown fooled a lot of people, including millions of fans on Spotify and the writers and editors at Rolling Stone. It was a band that suddenly showed up on Spotify several months ago, with full albums of vintage, Americana-styled rock. Millions started streaming the band’s songs – except there was no band. The songs, the album art, the band’s photos – it was all generated by AI.

When you know this and relisten to the songs, you swear you would never have been fooled. Those who are now in the know say the music is formulaic, derivative and uninspired. Yet we were fooled – or, at least, millions of us were – taken in by an AI hoax, or what is now euphemistically labelled on Spotify as “a synthetic music project guided by human creative direction and composed, voiced and visualized with the support of artificial intelligence.”

Formulaic. Derivative. Synthetic. We mean these as criticisms. But they are accurate descriptions of exactly how AI works. It is synthesis by formulas (or algorithms) that parse billions or trillions of data points, identify patterns and derive the finished product from it. That is AI’s greatest strength…and its biggest downfall.

The human brain, on the other hand, works quite differently. Our biggest constraint is the limit of our working memory. When we analyze disparate data points, the available slots in our temporary memory bank can number in the single digits. To cognitively function beyond this limit, we have to do two things: “chunk” them together into mental building blocks and code them with emotional tags. That is the human brain’s greatest strength… and, again, its biggest downfall. What the human brain is best at is what AI is unable to do. And vice versa.

A few posts back, when talking about one less-than-impressive experience with an AI tool, I ended by musing about what role humans might play as AI evolves and becomes more capable. One possible answer is something labelled “HITL,” or “human in the loop.” It plugs the “humanness” that sits in our brains into the equation, allowing AI to do what it’s best at and humans to provide the spark of intuition or the “gut checks” that currently cannot come from an algorithm.

As an example, let me return to the subject of that previous post, building a website. There is a lot that AI could do to build out a website. What it can’t do very well is anticipate how a human might interact with the website. These “use cases” should come from a human, perhaps one like me.

Let me tell you why I believe I’m qualified for the job. For many years, I studied online user behavior quite obsessively and published several white papers that are still cited in the academic world. I was a researcher for hire, with contracts with all the major online players. I say this not to pump my own ego (okay, maybe a little bit – I am human after all) but to set up the process of how I acquired this particular brand of expertise.

It was accumulated over time, as I learned to analyze online interactions, code eye-tracking sessions and talk to users about their goals and intentions. All the while, I was continually plugging new data into my few available working memory slots and “chunking” them into the building blocks of my expertise, to the point where I could quickly look at a website or search results page and provide a pretty accurate “gut call” prediction of how a user would interact with it. This is – without exception – how humans become experts at anything. Malcolm Gladwell called it the “10,000-hour rule.” For humans to add any value “in the loop,” they must put in the time. There are no shortcuts.

Or – at least – there never used to be. There is now, and that brings up a problem.

Humans now do something called “cognitive off-loading.” If something looks like it’s going to be a drudge to do, we now get ChatGPT to do it. This is the slogging mental work that our brains are not particularly well suited to. That’s probably why we hate doing it – the brain is trying to shirk the work by tagging it with a negative emotion (brains are sneaky that way). Why not get AI – which can instantly sort through billions of data points and synthesize them into a one-page summary – to do our dirty work for us?

But by off-loading, we short-circuit the very process required to build that uniquely human expertise. Writer, researcher and educational change advocate Eva Keiffenheim outlines the potential danger for humans who “off-load” to a digital brain: we may lose the sole advantage we can offer in an artificially intelligent world. “If you can’t recall it without a device, you haven’t truly learned it. You’ve rented the information. We get stuck at ‘knowing about’ a topic, never reaching the automaticity of ‘knowing how.’”

For generations, we’ve treasured the concept of “know how.” Perhaps, in all that time, we forgot how much hard mental work was required to gain it. That could be why we are quick to trade it away now that we can.