Why Technology May Not Save Us

We are a clever species. We’re not as smart as we think we are, but we are pretty damn smart. We are the only species that has managed to forcibly shift the eternal cycles of nature for our own benefit. We have bent the world to our will. And look how that’s turning out for us.

For the last 10,000 years, our cleverness has set us apart from all other species on earth. For the last 1,000 years, the pace of that cleverness has accelerated. In the last 100 years, it has been advancing at breakneck speed. Our tools and ingenuity have dramatically reshaped our lives. Our everyday lives are full of stuff we couldn’t imagine just a few short decades ago.

That’s a trend that’s hard to ignore. And because of it, we could be excused for thinking the same will be true going forward. When it comes to thinking about technology, we tend to do so from a glass-half-full perspective. It’s worked for us in the past. It will work for us in the future. There is no problem so big that our own technological prowess cannot solve it.

But maybe it won’t. Maybe – just maybe – we’re now dealing with a type of problem to which technology is not well suited as a solution. Here are three reasons why.

The Unintended Consequences Problem

Technology solutions focus on the proximate rather than the distal – which is a fancy way of saying that technology always deals with the task at hand. Being technology, these solutions usually come from an engineer’s perspective, and engineers don’t do well with nuance. Complicated they can deal with. Complexity is another matter.

I wrote about this before when I wondered why tech companies tend to be confused by ethics. It’s because ethics falls into a category known as wicked problems. Racial injustice is another wicked problem. So is climate change. All of these things are complex and messy. Their dependence on collective human behavior makes them so. Engineers don’t like wicked problems because, by definition, they can never be concretely solved. They are also hotbeds of unintended consequences.

In Collapse, anthropologist Jared Diamond’s 2005 exploration of failed societies, past and present, Diamond notes that when we look forward, we tend to cling to technology as a way to dodge impending doom. But he notes, “underlying this expression of faith is the implicit assumption that, from tomorrow onwards, technology will function primarily to solve existing problems and will cease to create new problems.”

And there’s the rub. For every proximate solution it provides, technology has a nasty habit of unleashing scads of unintended new problems. Internal combustion engines, mechanized agriculture and social media come to mind immediately as just three examples. The more complex the context of the problem, the more likely it is that the solution will come with unintended consequences.

The 90 Day Problem

Going hand in hand with the unintended consequences problem is the 90 Day Problem. This is a port-over from the corporate world, where management tends to focus on problems that can be solved in 90 days. It comes from a human desire to link cause and effect. It’s why we have to-do lists. We like to get shit done.

Some of the problems we’re dealing with now – like climate change – won’t be solved in 90 days. They won’t be solved in 90 weeks or even 90 months. Being wicked problems, they will probably never be solved completely. If we’re very, very, very lucky and we start acting immediately and with unprecedented effort, we might be seeing some significant progress in 90 years.

This is the inconvenient truth of these problems. The consequences are impacting us today but the payoff for tackling them is – even if we do it correctly – sometime far in the future, possibly beyond the horizon of our own lifetimes. We humans don’t do well with those kinds of timelines.

The Alfred E. Neuman Problem

The final problem with relying on technology is that we think of it as a silver bullet. The alternative is a huge amount of personal sacrifice and effort with no guarantee of success. So it’s easier just to put our faith in technology and say, “What, Me Worry?” like Mad Magazine mascot Alfred E. Neuman. It’s much easier to shift the onus for surviving our own future onto some nameless, faceless geek somewhere who’s working their way toward their “Eureka” moment.

While that may be convenient and reassuring, it’s not very realistic. I believe the past few years – and certainly the past few months – have shown us that all of us have to make some very significant changes in our lives and be prepared to rethink what we thought our future might be. At the very least, it means voting for leadership committed to fixing problems rather than ignoring them in favor of the status quo.

I hope I’m wrong, but I don’t think technology is going to save our ass this time.

Do We Still Need Cities?

In 2011, Harvard economist Edward Glaeser called the city “man’s greatest invention” in his book “Triumph of the City,” noting that “there is a near-perfect correlation between urbanization and prosperity across nations.”

Why is this so? Historically, we needed a critical mass of connection to accelerate human achievement. Cities bring large numbers of people into closer, more frequent and more productive contact than other places do. This direct, face-to-face contact is critical for the exchange of knowledge and ideas that leads to the next new business venture, medical discovery or social innovation.

This has been true throughout our history. While cities can be messy and crowded, they also spin off an amazing amount of ingenuity and creativity, driving us all forward.

But the very same things that make cities hotbeds of productive activity also make them a human petri dish in the midst of a pandemic.

Example: New York

If the advantages that Glaeser lists are true for cities in general, they’re doubly true for New York, which just might be the greatest city in the world. Manhattan’s population density is 66,940 people per square mile, the highest of any area in the U.S. It’s also diverse, with 36% of its population foreign-born. And it attracts talent in all types of fields from around the world.

Unfortunately, all these things also set New York up to be particularly hard hit by COVID-19. To date, according to Google’s tracker, it has 236,000 confirmed cases of COVID-19 and a mortality rate of 10%. That case rate would put it ahead of all but 18 countries in the world. What has made New York great has also made it tragically vulnerable to a pandemic.

New York is famous for its gritty resilience. But at least one New Yorker thinks this might be the last straw for the Big Apple. In an essay entitled “New York City is dead forever,” self-published and then reprinted by the New York Post, comedy club owner James Altucher talks about how everyone he knows is high-tailing it out of town for safer, less crowded destinations, leaving a ghost town in their wake.

He doesn’t believe they’re coming back. The connections that once relied on physical proximity can now be replicated by technology. Not perfectly, perhaps, but well enough. Certainly well enough to tip the balance away from the compromises you have to be prepared to swallow to live in a city like New York: higher costs of living, exorbitant real estate, higher crime rates and the other grittier, less-glittery sides of living in a crowded, dense metropolis.


Example: Silicon Valley

So perhaps tech is partly (or largely) to blame for the disruption of the interconnectedness of cities. But, ironically, thanks to COVID-19, the same thing is happening to the birthplace of tech: Silicon Valley and the Bay Area of Northern California.

Barb is a friend of mine who was born in Canada but has lived much of her life in Palo Alto, California — a stone’s throw from the campus of Stanford University. She recently beat a temporary retreat back to her home and native land north of the 49th Parallel. When her Palo Alto friends and neighbors asked why Canada seemed to be a safer place right now, she explained it like this:

“My county — Santa Clara — with a population of less than 2 million people, has had almost as many COVID cases in the last three weeks as the entire country of Canada.”

She’s been spending her time visiting her Canadian-based son and exploring the natural nooks and crannies of British Columbia while doing some birdwatching along the way.  COVID-19 is just one of the factors that has caused her to start seriously thinking about life choices she couldn’t have imagined just a few short years ago. As Barb said to me as we chatted, “I have a flight home booked — but as it gets closer to that date, it’s becoming harder and harder to think about going back.”  

These are just two examples of the reordering of what will become the new normal. Many of us have retreated in search of a little social distance from what our lives were. Increasingly, we are relying on tech to bridge the distances that we are imposing between ourselves and others. Breathing room — in its most literal sense — has become our most immediate priority.

This won’t change anytime soon. We can expect this move to continue for at least the next year. It could be — and I suspect it will be — much longer. Perhaps James Altucher is right. Could this pandemic – aided and abetted by tech – finally be what kills mankind’s greatest invention? As he writes in his essay,

“Everyone has choices now. You can live in the music capital of Nashville, you can live in the ‘next Silicon Valley’ of Austin. You can live in your hometown in the middle of wherever. And you can be just as productive, make the same salary, have higher quality of life with a cheaper cost.”

If Altucher is right, there’s another thing we need to think about. According to Glaeser, cities are not only great for driving forward innovation. They also put some much-needed distance between us and nature:

“We humans are a destructive species. We tend to destroy stuff when we’re around it. And if you love nature, stay away from it.”

As we look to escape one crisis, we might be diving headlong into the next.

What Would Aaron Do?

I am a big Aaron Sorkin fan. And before you rain on my parade, I say that fully understanding that he epitomizes the liberal intellectual elitist, sanctimonious cabal that has helped cleave American culture in two. I get that. And I don’t care.

I get that his message is from the left side of the ideological divide. I get that he is preaching to the choir. And I get that I am part of the choir. Still, given the times, I felt that a little Sorkin sermon was just what I needed. So I started rewatching Sorkin’s HBO series “The Newsroom.”

If you aren’t part of this particular choir, let me bring you up to speed. The newsroom in this case is at the fictional cable network ACN. One of the primary characters is lead anchor Will McAvoy (played by Jeff Daniels), who has built his audience by being noncontroversial and affable — the Jay Leno of journalism.

This brings us to the entrance of the second main character: MacKenzie McHale, played by Emily Mortimer. Exhausted from years as an embedded journalist covering multiple conflicts in Afghanistan, Pakistan and Iraq, she comes on board as McAvoy’s new executive producer (and also happens to be his ex-girlfriend).

In typical Sorkin fashion, she goads everyone to do better. She wants to reimagine the news by “reclaiming journalism as an honorable profession,” with “civility, respect, and a return to what’s important; the death of bitchiness; the death of gossip and voyeurism; speaking truth to stupid.”

I made it to episode 3 before becoming profoundly sad and world-weary. Sorkin’s sermon from 2012 — just eight years ago — did not age well. It certainly didn’t foreshadow what was to come.

Instead of trying to be better, the news business — especially cable news — has gone in exactly the opposite direction, heading straight for Aaron Sorkin’s worst-case scenario. That scenario formed part of a Will McAvoy speech in that third episode: “I’m a leader in an industry that miscalled election results, hyped up terror scares, ginned up controversy, and failed to report on tectonic shifts in our country — from the collapse of the financial system to the truths about how strong we are to the dangers we actually face.”

That pretty much sums up where we are. But even Sorkin couldn’t anticipate what horrors social media would throw into the mix. The reality is actually worse than his worst-case scenario. 

Sorkin’s appeal for me was that he always showed what “better” could be. That was certainly true in his breakthrough political hit “The West Wing.” 

He brought the same message to the jaded world of journalism in “The Newsroom.” He was saying, “Yes, we are flawed people working in a flawed system set in a flawed nation. But it can be better….Our future is in our hands. And whatever that future may be, we will be held accountable for it when it happens.”

This message is not new. It was the blood and bones of Abraham Lincoln’s annual address to Congress on December 1, 1862, just one month before he signed the Emancipation Proclamation. Lincoln was preparing the nation for the choice of a path that may have been unprecedented and unimaginably difficult, but would ultimately prove to be the more moral one: “It is not ‘can any of us imagine better?’ but, ‘can we all do better?’ The dogmas of the quiet past, are inadequate to the stormy present. The occasion is piled high with difficulty, and we must rise with the occasion.”

“The Newsroom” was Sorkin’s last involvement with a continuing TV series. He was working on his directorial movie debut, “Molly’s Game,” when Trump got elected.

Since then, he has adapted Harper Lee’s “To Kill a Mockingbird” for Broadway, with “The Newsroom’s” Jeff Daniels as Atticus Finch.

Sorkin being Sorkin, he ran into a legal dispute with Lee’s estate when he updated the source material to be a little more open about the racial tension that underlies the story. Aaron Sorkin is not one to let sleeping dogmas lie. 

Aaron Sorkin also wrote a letter to his daughter and wife on the day after the 2016 election, a letter that perhaps says it all.

It began, “Well the world changed late last night in a way I couldn’t protect us from.”

He was saying that as a husband and father. But I think it was a message for us all — a message of frustration and sadness. He closed the letter by saying “I will not hand [my daughter] a country shaped by hateful and stupid men. Your tears last night woke me up, and I’ll never go to sleep on you again.”

Yes, Sorkin was preaching when he was scripting “The Newsroom.” But he was right. We should do better. 

In that spirit, I’ll continue to dissect the Reuters study on the current state of journalism I mentioned last week. And I’ll do this because I think we have to hold our information sources to “doing better.” We have to do a better job of supporting those journalists that are doing better. We have to be willing to reject the “dogmas of the quiet past.” 

One of those dogmas is news supported by advertising. The two are fundamentally incompatible. Ad-supported journalism is a popularity contest, with the end product a huge audience custom sliced, diced and delivered to advertisers — instead of a well-informed populace.

We have to do better than that.

The Potential Woes of Working from Home

Many of you now have a few months under your belts working from home rather than going to the office. At least some of you are probably considering continuing to do so even after COVID recedes and the all-clear is given to return to normal. A virtual workplace makes all kinds of rational sense – both for employees and employers. But there are irrational reasons why you might want to think twice before you fully embrace going virtual.

About a decade ago, my company also went with a hybrid virtual/physical workplace. As the CEO, I found a lot to like about it. It was a lot more economical than leasing more office space. It gave us the flexibility to recruit top talent in areas where we had no physical presence. And it seemed that technology was up to the task of providing the communication and workflow tools we needed to support our virtual members.

On the whole, our virtual employees also seemed to like it. It gave them more flexibility in their workday. It also made it less formal. If you wanted to work in pajamas and bunny slippers, so be it. And with a customer base spread across many time zones, it also made it easier to shift client calls to times that were mutually acceptable.

It seemed to be a win-win. For a while. Then we noticed that all was not wonderful in work-from-home land.

I can’t say productivity declined. We were always a results-based workplace, so as long as the work got done, we were happy. But we started to feel a shift in our previously strong corporate culture. Team-member complaints about seemingly minor things skyrocketed. There was less cohesion across teams. Finally – and most critically – it started to impact our relationships with our customers.

Right about the time all this was happening, we were acquired by a much bigger company. One of the dictates that was handed down from the new owners was that we establish physical offices and bring our virtual employees back to the mothership for the majority of their work-week. At the time, I wasn’t fully aware of the negative consequences of going virtual so I initially fought the decision. But to be honest, I was secretly happy. I knew something wasn’t quite right. I just wasn’t sure what it was. I suspected it might have been our new virtual team members.

The move back to a physical workplace was a tough one. Our virtual team members were very vocal about how this was a loss of their personal freedom. New HR fires were erupting daily and I spent much of my time fighting them. This, combined with the inevitable cultural consequences of being acquired, often made me shake my head in bewilderment. Life in our company was turning into a shit-show.

I wish I could say that after we all returned to the same workplace, we joined hands and sang a rousing chorus of Kumbaya. We didn’t. The damage had been done. Many of the disgruntled former virtual team members ended up moving on. The cultural core of the company remained with our original team members who had worked in the same office location for several years. I eventually completed my contract and went my own way.

I never fully determined what the culprit was. Was it our virtual team members? Or was it the fact that we embraced a virtual workplace without considering the unintended consequences? I suspected it was a little of both.

Like I said, that was a decade ago. From a rational perspective, all the benefits of a virtual workplace seem even more enticing than they did then. But in the last 10 years, there has been research done on those irrational factors that can lead to the cracks in a corporate culture that we experienced.

Mahdi Roghanizad is an organizational behavior specialist from Ryerson University in Toronto. He has long looked at the limitations of computerized communication. And his research provides a little more clarity into our failed experiment with a virtual workplace.

Roghanizad has found that without real-life contact, the parts of our brain that provide us with the connections needed to build trust never turn on. In order to build a true relationship with another person, we need something called the Theory of Mind. According to Wikipedia, “Theory of mind is necessary to understand that others have beliefs, desires, intentions, and perspectives that are different from one’s own.”

But unless we’re physically face-to-face with another person, our brain doesn’t engage in this critical activity. “Eye contact is required to activate that theory of mind and when the eye contact is not there, the whole other signal information is not processed by our brain,” said Roghanizad. Even wearing a pair of sunglasses is enough to short circuit the process. Relegating contact to a periodic Zoom call guarantees that this empathetic part of our brains will never kick in.

But it’s not just about being eyeball to eyeball. There are other non-verbal cues we rely on to connect with other people and create a Theory of Mind. Other research has shown the importance of pheromones and physical gestures, like crossing your arms or leaning forward or back. This is why we subconsciously start to physically imitate people we’re talking to. The stronger the connection with someone, the more we imitate them.

This all comes back to the importance of bandwidth in the real world. A digital connection cannot possibly incorporate all the nuance of a face-to-face connection. And whether we realize it or not, we rely on that bandwidth to understand other people. From that understanding comes the foundations of trusted relationships. And trusted relationships are the difference between a high-functioning work team and a dysfunctional one.

I wish I knew that ten years ago.

Is the Marketing Industry Prepared for What Lies Ahead?

It was predictable. Humans are starting to do what humans do. We are beginning to shift gears, working our way through the stages of shock. We are daring to look beyond today and wondering what tomorrow might be like. Very smart people, like Sapiens author Yuval Noah Harari, are concerned about what we may trade away in the teeth of this crisis.

Others, like philosopher Barbara Muraca, climate activist Greta Thunberg and Media Spin’s own Kaila Colbin, are hoping that this might represent a global reset moment for us. Perhaps this will finally break our obsession with continual year-after-year growth, fueled by our urges to acquire and consume. We in the advertising and marketing business kept dumping gas on that unsustainable dumpster fire. Some of us – myself included – hope that COVID-19 will be a kind of shock therapy, convincing us to take a kinder, gentler approach to both the planet and each other.

My own crystal ball gazing is on a much-reduced scale. Specifically, I’m wondering what advertising and marketing might be like in our immediate future. I started by looking back at what history can teach us about recovery from a crisis.

Both World Wars resulted in explosions of consumerism. One could probably make the argument that the consumerism that happened after World War II has continued pretty much uninterrupted right to the current day. We basically spent our way out of the dotcom implosion of 1999 – 2002 and the Great Recession of 2007 – 2009.

But will this be different? I think it will, for three reasons.

First, both World Wars repressed consumer demand for a matter of years. With World War I, it was four years of fighting, plus another three marked by the Spanish Flu pandemic and a brief but sharp recession as the economy shifted gears from wartime to peacetime. With World War II, it was six years of repressed consumerism.

Second, the wars presented those of us here in North America with a very different psychological landscape. We went “over there” to fight and then “came home” when it was done. The war wasn’t on our front stoop. That gave us both physical and emotional distance after the war was over.

Finally, when the war was over, it was over. The world had to adjust to a new normal, but the fighting had stopped. That gave consumers a clear mental threshold to step beyond. You didn’t have to worry that you might be called back into service on any day, returning once again to the grim reality that was.

For these three reasons, I think our consumer mentality may look significantly different in the coming months. As we struggle back to whatever normal is between now and the arrival of a vaccine – currently estimated at 12 to 18 months away – we will have a significantly different consumer reality. We don’t have years of pent-up consumer demand that will wash away any pragmatic thoughts of restraint. We have been dealing with a crisis that has crept into our very homes. It has been present in every community, every neighborhood. And – most importantly – we will be living in a constant state of anxiety and fear for the foreseeable future. These three things are going to have a dramatic impact on our desire to consume.

Blogger Tomas Pueyo did a good job of outlining what our new normal may look like in his post “The Hammer and the Dance.” We are still very much in the “Hammer” phase, but we are beginning to wonder what the “Dance” may look like.

In our immediate future, we are going to hear a lot about the basic reproduction number – denoted as R0, or “R naught.” This is the average number of new cases an infected person will cause during their infectious period. A highly infectious disease like measles has an R naught of between 12 and 18. Current estimates put COVID-19’s R naught between 1.5 and 3.5. Most models assume an R naught of about 2.4.

This is important to understand if we want to know what our habits of consumption might look like until a vaccine is found. As long as that R naught number is higher than 1, the disease continues to spread. If we can get it lower than 1, the numbers stabilize and eventually decline. The “Dance” Pueyo refers to is the set of actions needed to keep that number below 1 without completely stalling the economy. With extremely restrictive measures, you could theoretically reduce it to zero, but in the process you would shut down the entire economy. Relax the restrictions too much and it climbs back up into exponentially increasing territory.
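
To make that threshold concrete, here is a minimal sketch (my own illustration in Python; the parameters are assumptions for demonstration, not figures from Pueyo’s post). Each case seeds a random number of new cases averaging R, and the gap between R = 2.4 and R = 0.8 compounds with every generation:

```python
import numpy as np

def simulate(r, generations=10, initial_cases=100, seed=1):
    """Toy branching process: each case causes a Poisson(r) number
    of new cases in the next generation. Illustrative only."""
    rng = np.random.default_rng(seed)
    history = [initial_cases]
    for _ in range(generations):
        # One Poisson(r) draw per current case; their sum is the next generation.
        history.append(int(rng.poisson(r, size=history[-1]).sum()))
    return history

print(simulate(2.4))  # R above 1: cases grow roughly geometrically
print(simulate(0.8))  # R below 1: cases dwindle toward zero
```

The whole game of the “Dance” is nudging that multiplier from the first scenario to the second without flattening the economy along with the curve.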

Much of the commentary I’m reading is assuming we will go back to “normal” or some variation of it. But the new “normal” is this dance, where we will be balanced on the knife’s edge between human cost and economic cost. For the next several months we will be teetering from one side to the other. At best, we can forget about widespread travel, large public gatherings and sociability as we previously knew it. At worst, we go back into full lockdown.

This is the psychological foundation our consumption will be based on. We will be in a constant state of anxiety and fear. And our marketing strategies will have to address that. Further, marketing needs to factor in this new normal in an environment where brand messaging is no longer a unilateral exercise. It is amplified and bent through social media. Frayed nerves make for a very precarious arena in which to play a game we’re still learning the rules of. We can expect ideological and ethical divisions to widen and deepen during the Dance.

The duration of the Dance will be another important factor to consider when we think about marketing. If it goes on long enough, our temporary behavioral shifts become habits. The changes that are forced upon us may become permanent. Will we ever feel good about stepping aboard a cruise ship? Will we ever be comfortable again in a crowded restaurant or bar? Will we pay $300 to be jammed into a stadium with 50,000 other people? We don’t know the answers to these questions yet.

Successful marketing depends on being able to anticipate the mood of the audience. The Dance will make that much more difficult. We will be racing up and down Maslow’s Hierarchy of Needs at a frenzied pace that would make a game of Snakes and Ladders seem mild by comparison.

That’s what happens in times of crisis. In normal times we spend our lifetimes scaling Abraham Maslow’s elegant model from the bottom – fulfilling our most basic physical needs – to the top – an altruistic concern for the greater good. Most of us spend much of our time stuck somewhere north of the middle, obsessed with our own status and shoring up our self-esteem. It’s this relatively stable mental state that the vast majority of marketing is targeted at.

But in a chronic crisis mode like that which is foretold by the Dance, we can crash from the top to the bottom of the hierarchy in the blink of an eye. And we will all be doing this at different times in different locations.

The Dance introduces a consumer scenario marketers have never encountered before. We will crave comfort and security. We will be desperate for glimpses of positivity. And it’s certain that our values and beliefs will shift but it’s difficult to predict in which direction. While my hope is that we become kinder and gentler people, I suspect it will be a toss-up. It could well go the other way.

If you thought marketing was tough before, buckle up!

Bubbles, Bozos and the Mediocrity Sandwich

I spent most of my professional life inside the high-tech bubble. Having now survived the better part of a decade outside said bubble, I have achieved enough distance to be able to appreciate the lampooning skills of Dan Lyons. If that name doesn’t sound familiar, you may have seen his work. He was the real person behind the Fake Steve Jobs blog. He was also the senior technology editor for Forbes and Newsweek prior to being cut loose in the print media implosion. He later joined the writing staff of Mike Judge’s brilliant HBO series Silicon Valley.

Somewhere in that career arc, Lyons briefly worked at a high-tech startup. From that experience, he wrote “Disrupted: My Misadventure in the Start-Up Bubble.” It gives new meaning to the phrase “painfully funny.”

After being cast adrift by Forbes, Lyons decided to change his perspective on the Bubble from “outside looking in” to “inside looking out.” He wanted to jump on the bubble bandwagon, grab a fistful of options and cash in. And so he joined HubSpot as a content producer for its corporate blog. The story unfolds from there.

One particularly sharp and insightful chapter of the book recalls Steve Jobs’ “Bozo Explosion”:

“Apple CEO Steve Jobs used to talk about a phenomenon called a ‘bozo explosion,’ by which a company’s mediocre early hires rise up through the ranks and end up running departments. The bozos now must hire other people, and of course they prefer to hire bozos. As Guy Kawasaki, who worked with Jobs at Apple, puts it: ‘B players hire C players, so they can feel superior to them, and C players hire D players.’ “

The Bozo Explosion is largely a tech-startup phenomenon, mainly because of some of the aspects of the culture I talked about in a previous column. But I ran into my own version back in my consulting career. And I ran into it in all kinds of companies. I used to call it the Mediocrity Sandwich.

The Mediocrity Sandwich lives in middle management. I used to find that the people at the C level of a company were usually pretty smart and competent (that said, I did run across some notable exceptions in my time). I also found that the people on the customer-facing front lines were pretty smart and – more importantly – very aware of the company’s own issues.

But addressing those issues invariably caused a problem. You had senior executives who were certainly capable of fixing the problems, whatever they might be. And you had front-line employees who were painfully aware of what the problems were and motivated to implement solutions. But all the momentum of any real problem-solving initiative got sucked out somewhere in the middle of the corporate org chart. The problem was the Mediocrity Sandwich.

You see, I don’t think the Bozo Explosion is so much a pyramid – skinny at the top, broad at the bottom – as it is an inverted U-shaped curve. I think “bozoism” tends to peak in the middle. You certainly have the progression from A’s to B’s to C’s as you move down from the top executive rungs. But then you have the inverse happening as you move from middle management to the front lines. The problem is the attrition of competence as you become absorbed into the organization. It’s the Bozo Explosion in reverse.

I usually found there was enough breathing room for competence to survive at the entry level of an organization. There were enough degrees of separation between the front line and the bozos in middle management. But as you started to climb the corporate ladder, you kept getting closer to the bozos. Your job frustration began to climb as they gained more influence over your day-to-day work. Truly competent players bailed and moved on to less bozo-infested environments. Those who remained either were born bozos or had bozo-ness thrust upon them. Either way, as you climbed toward middle management, the bozo factor climbed in lockstep. The result? A bell curve of bozos centered between the C level and the front lines.

This creates a poisonous outlook for the long-term prospects of a company. Eventually, the C-level executives will age out of their jobs. But who will replace them? The internal farm team is a bunch of bozos. You can recruit from outside, but then the incoming talent inherits a Mediocrity Sandwich. The company begins to rot from within.

For companies to truly change, you have to root out the bozo-rot, but this is easier said than done. If there is one single thing that bozos are good at, it is bozo butt-covering.

What Happens When A Black Swan Beats Up Your Brand

I’m guessing the word Corona brings many things to your mind right now — and a glass of ice-cold beer may not be one of them. A brand that once made us think of warm, sunny beaches and Mexican vacations on the Mayan Riviera is now mentally linked to a global health crisis. Sometimes the branding gods smile on you in their serendipity, and sometimes they piss in your cornflakes. For Grupo Modelo, the makers of Corona beer, the latter is most definitely the case.

As MediaPost Editor Joe Mandese highlighted in a post last week, almost 40% of American beer drinkers in a recent poll said they would not buy Corona under any circumstances. Fifteen percent of regular Corona drinkers said they would no longer order it in public. No matter how you slice those numbers, that does not bode well for the U.S.’s top-selling imported beer.

It remains to be seen what effect the emerging pandemic will have on the almost 100-year-old brand. Obviously, Grupo Modelo, the owners of the brand, deny that there is any permanent damage. But then, what else would you expect them to say? There’s a lot of beer sitting on shelves around the world, waiting to be drunk. It’s just unfortunate that it shares a name with a health crisis that is, so far, the biggest story of this decade.

This is probably not what the marketing spin doctors at Grupo Modelo want to hear, but a similar thing happened about 40 years ago.  Here is the story of another brand whose name got linked to the biggest health tragedy of the 1980s.

In 1946, the Carlay Company of Chicago registered a trademark for a “reducing plan vitamin and mineral candy” that had already been in commercial use for almost a decade. The company claimed that users of the new “vitamin” could “lose up to 10 pounds in 5 days, without dieting or exercising.” The Federal Trade Commission had already called bullshit on that claim, forcing Carlay to strip it from its marketing in 1944.

Marketing being marketing, it wasn’t the vitamins in this “vitamin” that allegedly caused the pounds to melt away. In the beginning, it was something chemists call benzocaine. That’s a topical anesthetic you’ll also find in over-the-counter products like Orajel. Basically, benzocaine numbed the tongue. The theory was that a tongue that couldn’t taste anything would be less likely to crave food.

The active ingredient was later changed to phenylpropanolamine, which was also used as a decongestant in cold medications and to control urinary incontinence in dogs. In the ‘60s and ’70s, it became a common ingredient in many diet pills. Then it was discovered to cause strokes in young women.

The Carlay Company eventually became part of the Campana Corporation, which in turn was sold to Purex. The product morphed from a vitamin to a diet candy and was sold in multiple flavors, including chocolate, chocolate mint, butterscotch and caramel. If you remember Kraft caramels — little brown cubes packaged in clear cellophane — you have a good idea what these diet candies looked like.

Despite the shaky claims and dubious ingredients, the diet candies became quite popular. I remember that my mother, who had a lifelong struggle with her weight, usually had a box of them in the cupboard when I was growing up. Sales hit their peak in the ’70s and early ’80s. There were TV ads, and celebrity endorsers — including Bob Hope and Tyrone Power — lined up to hawk them.

Then, in 1981, the Centers for Disease Control and Prevention (CDC) published a report about five previously healthy men who had all become infected with Pneumocystis pneumonia. The odd thing was that this type of pneumonia is almost never found in healthy people. There was another odd thing: all five men were gay. In 1982, the CDC gave a name to this new disease: AIDS.

Of all the ways AIDS changed our world in the 1980s, one was particularly relevant to the marketers of those diet candies, which just happened to be named Ayds.

You can see the problem.

Ayds soldiered on until 1988, despite sales dropping 50%. The company tried new names, including Diet Ayds and, in the U.K., Aydslim. It was too little, too late. The candies were eventually withdrawn from the market.

Does this foretell the fate of Corona beer? Perhaps not. AIDS has been part of our public consciousness for four decades; a product with a similar-sounding name didn’t stand a chance. We can hope that the coronavirus will not have the same longevity. And the official name of the disease has now been changed to COVID-19. For both these reasons, Corona — the beer — might be able to ride out the storm caused by corona, the virus.

But you can bet that there are some pretty uncomfortable meetings being held right now in the marketing department boardroom at Grupo Modelo.

What is the Moral Responsibility of a Platform?

The owners of the AirBnB home in Orinda, California, suspected something was up. The woman who wanted to rent the house for Halloween night swore it wasn’t for a party. She said it was for a family reunion that had to relocate at the last minute because of the wildfire smoke coming from the Kincade fire, 85 miles north of Orinda. The owners reluctantly agreed to rent the home for one night.

Shortly after 9 p.m., the neighbors called the owners, complaining of a party raging next door. The owners verified this through their doorbell camera. The police were sent. Over 100 people who had responded to a post on social media were packed into the million-dollar home. At 10:45 p.m., with no warning, things turned deadly. Gunshots were fired. Four men in their twenties were killed immediately. A 19-year-old woman died the next day. Several others were injured.

Here is my question. Is AirBnB partly to blame for this?

This is a prickly question. And it’s one that extends to any of the highly disruptive platforms. Technological disruption is a race against our need for order and predictability. When the status quo is upended, there is a progression toward a new civility – but that progression takes time, and technology is outstripping it. Platforms create new opportunities – for the best of us and the worst.

The simple fact is that technology always unleashes ethical ramifications – and the more disruptive the technology, the more serious the ethical considerations. The other tricky bit is that some of those ramifications can be foreseen… but others cannot.

I have often said that our world is becoming a more complex place. Technology is multiplying this complexity at an ever increasing pace. And the more complex things are, the more difficult they are to predict.

As Homo Deus author Yuval Noah Harari explains, it is the very pace of technological change that is making the future increasingly difficult to predict:

“Today our knowledge is increasing at breakneck speed, and theoretically we should understand the world better and better. But the very opposite is happening. Our new-found knowledge leads to faster economic, social and political changes; in an attempt to understand what is happening, we accelerate the accumulation of knowledge, which leads only to faster and greater upheavals. Consequently, we are less and less able to make sense of the present or forecast the future.”

This acceleration is also eliminating the gap between cause and consequence. We used to have the luxury of time to digest disruption. But now, the gap between the introduction of the technology and the ripples of the ramifications is shrinking.

Think about the ethical dilemmas and social implications introduced by the invention of the printing press. Thanks to it, literacy crept down through the social classes, disrupting entire established hierarchies, unleashing ideological revolutions and ushering in tsunamis of social change. But cause and consequence were separated by decades, even centuries. Should Gutenberg be held responsible for the French Revolution? The question seems laughable, but only because almost three and a half centuries lie between the two.

As the printing press eventually proved, technology typically dismantles vertical hierarchies. It democratizes capabilities – spreading them down to new users and, in the process, making the previously impossible possible. I have always said that technology is simply a tool, albeit an often disruptive one. It doesn’t change human behaviors; it enables them. But here we have an interesting phenomenon: if technology pushes capabilities down to more people and simultaneously frees those users from the restraint of a vertical governing structure, you have a highly disruptive sociological experiment happening in real time with a vast sample of subjects.

Most things about human nature are governed by a normal distribution curve – also known as a bell curve. Behaviors expressed through new technologies are no exception. When you rapidly expand access to a capability, you are going to have a whole spectrum of ethical attitudes interacting with it. At one end of that spectrum are bad actors. You will find them on both sides of a market expanding at roughly the same rate as our universe. And those actors will do awful things with the technology.
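
To put rough numbers on that thought (a back-of-the-envelope sketch of my own; the 0.1% “bad actor” tail is an invented figure, purely for illustration), the troubling part isn’t the percentage in the tail of the bell curve – it’s what platform scale does to the absolute count:

```python
# Hypothetical: assume a fixed 0.1% of any user base sits in the
# "bad actor" tail of the bell curve. The rate is invented for illustration.
BAD_ACTOR_RATE = 0.001

for users in (10_000, 1_000_000, 1_000_000_000):
    print(f"{users:>13,} users -> ~{int(users * BAD_ACTOR_RATE):,} bad actors")

# Output:
#        10,000 users -> ~10 bad actors
#     1,000,000 users -> ~1,000 bad actors
# 1,000,000,000 users -> ~1,000,000 bad actors
```

A small community can police ten bad actors. No moderation team on earth has a simple answer for a million of them.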

Our innate sense of fairness seeks a simple line between cause and effect. If shootings happen at an AirBnB party house, then AirBnB should be held at least partly responsible. Right?

I’m not so sure. That’s the simple answer, but after giving it much thought, I don’t believe it’s the right one. As with my earlier example of the printing press, I think trying to saddle a new technology with the unintentional and unforeseen social disruption it unleashes is myopic. It’s an attitude that would halt technological progress in its tracks.

I fervently believe new technologies should be designed with humanitarian principles in mind. They should elevate humans, strive for neutrality, be impartial and foster independence. In the real world, they should do all this in a framework that allows for profitability. It is this, and only this, that is reasonable to ask from any new technology. To try to ask it to foresee every potential negative outcome or to retroactively hold it accountable when those outcomes do eventually occur is both unreasonable and unrealistic.

Disruptive technologies will always find the loopholes in our social fabric. They will make us aware of the vulnerabilities in our legislation and governance. If there is an answer to be found here, it is to be found in ourselves. We need to take accountability for the consequences of the technologies we adopt. We need to vote for governments that are committed to keeping pace with disruption through timely and effective governance.

Like it or not, the technology we have created and adopted has propelled us into a new era of complexity and unpredictability. We are flying into uncharted territory by the seat of our pants here. And before we rush to point fingers we should remember – we’re the ones that asked for it.

The Fundamentals of an Evil Marketplace

Last week, I talked about the nature of tech companies and why this leads to them being evil. But as I said, there was an elephant in the room I didn’t touch on — and that’s the nature of the market itself. The platform-based market also has inherent characteristics that lead toward being evil.

The problem is that corporate ethics are usually based on the philosophies of Milton Friedman, an economist whose heyday was in the 1970s. Corporations are playing by a rule book that is tragically out of date.

Beware the Invisible Hand

Friedman said, “The great virtue of a free market system is that it does not care what color people are; it does not care what their religion is; it only cares whether they can produce something you want to buy. It is the most effective system we have discovered to enable people who hate one another to deal with one another and help one another.”

This is a porting over of Adam Smith’s “Invisible Hand” theory from economics to ethics: the idea that an open and free marketplace is self-regulating and, in the end, the model that is the most virtuous to the greatest number of people will take hold.

That was a philosophy born in another time, referring to a decidedly different market. Friedman’s “virtue” depends on a few traditional market conditions, idealized in the concept of a perfect market: “a market where the sellers of a product or service are free to compete fairly, and sellers and buyers have complete information.”

Inherent in Friedman’s definition of market ethics is the idea of a deliberate transaction, a value trade driven by rational thought. This is where the concept of “complete information” comes in. This information is what’s required for a rational evaluation of the value trade. When we talk about the erosion of ethics we see in tech, we quickly see that the prerequisite of a deliberate and rational transaction is missing — and with it, the conditions needed for an ethical “invisible hand.”

The other assumption in Friedman’s definition is a marketplace that encourages open and healthy competition. This gives buyers the latitude to make the choice that best aligns with their requirements.

But when we’re talking about markets that tend to trend towards evil behaviors, we have to understand that there’s a slippery slope that ends in a place far different than the one Friedman idealized.

Advertising as a Revenue Model

For developers of user-dependent networks like Google and Facebook, using advertising sales for revenue was the path of least resistance for adoption — and, once adopted by users, to profitability. It was a model co-opted from other forms of media, so everybody was familiar with it. But, in the adoption of that model, the industry took several steps away from the idea of a perfect market.

First of all, you have significantly lowered the bar required for that rational value exchange calculation. For users, there is no apparent monetary cost. Our value judgement mechanisms idle down because it doesn’t appear as if the protection they provide is needed.

In fact, the opposite happens. The reward center of our brain perceives a bargain and starts pumping the accelerator. We rush past the accept buttons to sign up, thrilled at the new capabilities and convenience we receive for free. That’s the first problem.

The second is that the minute you introduce advertising, you lose the transparency that’s part of the perfect market. There is a thick layer of obfuscation that sits between “users” and “producers.” The smoke screen is required because of the simple reality that the best interests of the user are almost never aligned with the best interests of the advertiser.

In this new marketplace, advertising is a zero-sum game. For the advertiser to win, the user has to lose. The developers of these platforms hide this simple arithmetic behind a veil of secrecy and baffling language.

Products That are a Little Too Personal

The new marketplace is different in another important way: The products it deals in are unlike any products we’ve ever seen before.

The average person spends about a third of his or her time online, mostly interacting with a small handful of apps and platforms. Facebook alone accounts for almost 20% of all our waking time.

This reliance on these products reinforces our belief that we’re getting the bargain of a lifetime: All the benefits the platform provides are absolutely free to us! Of course, in the time we spend online, we are feeding these tools a constant stream of intimately personal information about ourselves.

What is lurking behind this benign facade is a troubling progression of addictiveness. Because revenue depends on advertising sales, two factors become essential to success: the attention of users, and information about them.

An offer of convenience or usefulness “for free” is the initial hook, but then it becomes essential to entice them to spend more time with the platform and also to volunteer more information about themselves. The most effective way to do this is to make them more and more dependent on the platform.

Now, you could build conscious dependency by giving users good, rational reasons to keep coming back. Or, you could build dependence subconsciously, by creating addicts. The first option is good business that follows Friedman’s philosophy. The second option is just evil. Many tech platforms — Facebook included — have chosen to go down both paths.

The New Monopolies

The final piece of Friedman’s idealized marketplace that’s missing is the concept of healthy competition. In a perfect marketplace, the buyer’s cost of switching is minimal. You have a plethora of options to choose from, and you’re free to pursue the one best for you.

This is definitely not the case in the marketplace of online platforms and tools like Google and Facebook. Because they are dependent on advertising revenues, their survival is linked to audience retention. To this end, they have constructed virtual monopolies by ruthlessly eliminating or buying up any potential competitors.

Further, under the guise of convenience, they have imposed significant costs on those that do choose to leave. The net effect of this is that users are faced with a binary decision: Opt into the functionality and convenience offered, or opt out. There are no other choices.

Whom Do You Serve?

Friedman also said, in a 1970 paper, that the only social responsibility of a business is to increase its profits. But this raises a further question: What must be done — and for whom — to increase profits? If it’s creating a better product so users buy more, then there is an ethical trickle-down effect that should benefit all.

But this isn’t the case if profitability is dependent on selling more advertising. Now we have to deal with an inherent ethical conflict. On one side, you have the shareholders and advertisers. On the other, you have users. As I said, for one to win, the other must lose. If we’re looking for the root of all evil, we’ll probably find it here.

Why Good Tech Companies Keep Being Evil

You’d think we’d have learned by now. But somehow it still comes as a shock to us when tech companies are exposed as having no moral compass.

Slate recently released what it called the “Evil List” of 30 tech companies, compiled through a ballot sent out to journalists, scholars, analysts, advocates and others. Slate asked them which companies were doing business in the way that troubled them most. Spoiler alert: Amazon, Facebook and Google topped the list. But they weren’t alone. The list of culprits rounding out the top 10 included Twitter, Apple, Microsoft and Uber.

Which raises the question: Are tech companies inherently evil — like, say, a Monsanto or a Philip Morris — or is there something about tech that positively correlates with “evilness”?

I suspect it’s the latter. I don’t believe Silicon Valley is full of fundamentally evil geniuses. But doing business as usual at a successful tech firm means there are a number of elemental aspects of the culture that can take a company down the path to being evil.

Cultism, Loyalism and Self-Selection Bias

A successful tech company is a belief-driven meat grinder that sucks in raw, naïve talent on one end and spits out exhausted and disillusioned husks on the other. To survive in between, you’d better get with the program.

The HR dynamics of a tech startup have been called a meritocracy, where intellectual prowess is the only currency.

But that’s not quite right. Yes, you have to be smart, but it’s more important that you’re loyal. Despite their brilliance, heretics are weeded out and summarily turfed, optionless in more ways than one. A rigidly molded group-think mindset takes over the recruitment process, leading to an intellectually homogeneous monolith.

To be fair, high-growth startups need this type of mental cohesion. As blogger Paras Chopra said in a post entitled “Why startups need to be cult-like”: “The reason startups should aim to be like cults is because communication is impossible between people with different values.” You can’t go from zero to 100 without this sharing of values.

But necessary or not, this doesn’t change the fact that your average tech startup is a cult, with all the same ideological underpinnings. And the more cult-like a culture, the less likely it is to take the time for a little ethical navel-gazing.

A Different Definition of Problem Solving

When all you have is a hammer, everything looks like a nail. And for the engineer, the hammer that fixes everything is technology. But, as academic researchers Emanuel Moss and Jacob Metcalf discovered, this brand of technical solutionism can lead to a corporate environment where ethical problems are ignored because they are open-ended, intractable questions. In a previous column I referred to them as “wicked problems.”

As Moss and Metcalf found, “Organizational practices that facilitate technical success are often ported over to ethics challenges. This is manifested in the search for checklists, procedures, and evaluative metrics that could break down messy questions of ethics into digestible engineering work. This optimism is counterweighted by a concern that, even when posed as a technical question, ethics becomes ‘intractable, like it’s too big of a problem to tackle.’”

If you take this to the extreme, you get the Cambridge Analytica example, where programmer Christopher Wylie was so focused on the technical aspects of the platform he was building that he lost sight of the ethical monster he was unleashing.

A Question of Leadership

Of course, every cult needs a charismatic leader, and this is abundantly true for tech-based companies. Hubris is a commodity not in short supply among the C-level execs of tech.

It’s not that they’re assholes (well, ethical assholes anyway). It’s just that they’re, umm, highly focused and instantly dismissive of any viewpoint that’s not the same as their own. It’s the same issue I mentioned before about the pitfalls of expertise — but on steroids.

I suspect that if you did an ethical inventory of Mark Zuckerberg, Jeff Bezos, Larry Page, Sergey Brin, Travis Kalanick, Reid Hoffman and the rest, you’d find that — on the whole — they’re not horrible people. It’s just that they have a very specific definition of ethics as it pertains to their company. Anything that falls outside those narrowly defined boundaries is either dismissed or “handled” so it doesn’t get in the way of the corporate mission.

Speaking of corporate missions, leaders and their acolytes are often unaware — sometimes willfully so — of the nuances of unintended consequences. Most tech companies develop platforms that allow disruptive new market-based ecosystems to evolve on their technological foundations. Disruption always unleashes unintended social consequences. When these inevitably happen, tech companies generally handle them in one of three ways:

  1. Ignore them, and if that fails…
  2. Deny responsibility, and if that fails…
  3. Briefly apologize, do nothing, and then return to Step 1.

There is a weird type of idol worship in tech. The person atop the org chart is more than an executive. They are corporate gods — and those who dare to be disagreeable are quickly weeded out as heretics. This helps explain why Facebook can be pilloried for attacks on personal privacy and questionable design ethics, yet Mark Zuckerberg still snags a 92% CEO approval rating on Glassdoor.com.

These fundamental characteristics help explain why tech companies seem to consistently stumble over to the dark side. But there’s an elephant in the room we haven’t talked about. Almost without exception, tech business models encourage evil behavior. Let’s hold that thought for a future discussion.