Do We Still Need Cities?

In 2011, Harvard economist Edward Glaeser called the city “man’s greatest invention” in his book “Triumph of the City,” noting that “there is a near-perfect correlation between urbanization and prosperity across nations.”

Why is this so? It’s because historically we needed a critical mass of connection to accelerate human achievement. Cities bring large numbers of people into closer, more frequent and more productive contact than other places. This direct, face-to-face contact is critical for facilitating the exchange of knowledge and ideas that leads to the next new business venture, medical discovery or social innovation.

This has been true throughout our history. While cities can be messy and crowded, they also spin off an amazing amount of ingenuity and creativity, driving us all forward.

But the very same things that make cities hotbeds of productive activity also make them a human petri dish in the midst of a pandemic.

Example: New York

If the advantages that Glaeser lists are true for cities in general, they’re doubly true for New York, which just might be the greatest city in the world. Manhattan’s population density is 66,940 people per square mile, the highest of any area in the U.S. It’s also diverse, with 36% of its population foreign-born. It attracts talent in all types of fields from around the world.

Unfortunately, all these things also set New York up to be particularly hard hit by COVID-19. To date, according to Google’s tracker, it has 236,000 confirmed cases of COVID-19 and a mortality rate of 10%. That case rate would put it ahead of all but 18 countries in the world. What has made New York great has also made it tragically vulnerable to a pandemic.

New York is famous for its gritty resilience. But at least one New Yorker thinks this might be the last straw for the Big Apple. In an essay entitled “New York City is dead forever,” self-published and then reprinted by the New York Post, comedy club owner James Altucher talks about how everyone he knows is high-tailing it out of town for safer, less crowded destinations, leaving a ghost town in their wake.

He doesn’t believe they’re coming back. The connections that once relied on physical proximity can now be replicated by technology. Not perfectly, perhaps, but well enough. Certainly well enough to tip the balance away from the compromises you have to be prepared to swallow to live in a city like New York: higher costs of living, exorbitant real estate, higher crime rates and the other grittier, less-glittery sides of living in a crowded, dense metropolis.


Example: Silicon Valley

So, perhaps tech is partly (or largely) to blame for the disruption to the interconnectedness of cities. But, ironically, thanks to COVID-19, the same thing is happening to the birthplace of tech: Silicon Valley and the Bay Area of Northern California.

Barb is a friend of mine who was born in Canada but has lived much of her life in Palo Alto, California — a stone’s throw from the campus of Stanford University. She recently beat a temporary retreat back to her home and native land north of the 49th Parallel. When her Palo Alto friends and neighbors asked why Canada seemed to be a safer place right now, she explained it like this:

“My county — Santa Clara — with a population of less than 2 million people, has had almost as many COVID cases in the last three weeks as the entire country of Canada.”

She’s been spending her time visiting her Canadian-based son and exploring the natural nooks and crannies of British Columbia while doing some birdwatching along the way.  COVID-19 is just one of the factors that has caused her to start seriously thinking about life choices she couldn’t have imagined just a few short years ago. As Barb said to me as we chatted, “I have a flight home booked — but as it gets closer to that date, it’s becoming harder and harder to think about going back.”  

These are just two examples of the reordering of what will become the new normal. Many of us have retreated in search of a little social distance from what our lives were. Increasingly, we are relying on tech to bridge the distances that we are imposing between ourselves and others. Breathing room — in its most literal sense — has become our most immediate priority.

This won’t change anytime soon. We can expect this move to continue for at least the next year. It could be — and I suspect it will be — much longer. Perhaps James Altucher is right. Could this pandemic – aided and abetted by tech – finally be what kills mankind’s greatest invention? As he writes in his essay,

“Everyone has choices now. You can live in the music capital of Nashville, you can live in the ‘next Silicon Valley’ of Austin. You can live in your hometown in the middle of wherever. And you can be just as productive, make the same salary, have higher quality of life with a cheaper cost.”

If Altucher is right, there’s another thing we need to think about. According to Glaeser, cities are not only great for driving innovation forward. They also put some much-needed distance between us and nature:

“We humans are a destructive species. We tend to destroy stuff when we’re around it. And if you love nature, stay away from it.”

As we look to escape one crisis, we might be diving headlong into the next.

What Would Aaron Do?

I am a big Aaron Sorkin fan. And before you rain on my parade, I say that fully understanding that he epitomizes the liberal intellectual elitist, sanctimonious cabal that has helped cleave American culture in two. I get that. And I don’t care.

I get that his message is from the left side of the ideological divide. I get that he is preaching to the choir. And I get that I am part of the choir. Still, given the times, I felt that a little Sorkin sermon was just what I needed. So I started rewatching Sorkin’s HBO series “The Newsroom.”

If you aren’t part of this particular choir, let me bring you up to speed. The Newsroom in this case is at the fictional cable network ACN. One of the primary characters is lead anchor Will McAvoy (played by Jeff Daniels), who has built his audience by being noncontroversial and affable — the Jay Leno of journalism.

This brings us to the entrance of the second main character: Mackenzie McHale, played by Emily Mortimer. Exhausted from years as an embedded journalist covering multiple conflicts in Afghanistan, Pakistan and Iraq, she comes on board as McAvoy’s new executive producer (and also happens to be his ex-girlfriend).

In typical Sorkin fashion, she goads everyone to do better. She wants to reimagine the news by “reclaiming journalism as an honorable profession,” with “civility, respect, and a return to what’s important; the death of bitchiness; the death of gossip and voyeurism; speaking truth to stupid.”

I made it to episode 3 before becoming profoundly sad and world-weary. Sorkin’s sermon from 2012 — just eight years ago — did not age well. It certainly didn’t foreshadow what was to come.

Instead of trying to be better, the news business — especially cable news — has gone in exactly the opposite direction, heading straight for Aaron Sorkin’s worst-case scenario. This scenario formed part of a Will McAvoy speech in that third episode: “I’m a leader in an industry that miscalled election results, hyped up terror scares, ginned up controversy, and failed to report on tectonic shifts in our country — from the collapse of the financial system to the truths about how strong we are to the dangers we actually face.”

That pretty much sums up where we are. But even Sorkin couldn’t anticipate what horrors social media would throw into the mix. The reality is actually worse than his worst-case scenario. 

Sorkin’s appeal for me was that he always showed what “better” could be. That was certainly true in his breakthrough political hit “The West Wing.” 

He brought the same message to the jaded world of journalism in “The Newsroom.” He was saying, “Yes, we are flawed people working in a flawed system set in a flawed nation. But it can be better….Our future is in our hands. And whatever that future may be, we will be held accountable for it when it happens.”

This message is not new. It was the blood and bones of Abraham Lincoln’s annual message to Congress on December 1, 1862, just one month before the Emancipation Proclamation was issued. Lincoln was preparing the nation for the choice of a path that was unprecedented and unimaginably difficult, but would ultimately prove to be the more moral one: “It is not ‘can any of us imagine better?’ but, ‘can we all do better?’ The dogmas of the quiet past, are inadequate to the stormy present. The occasion is piled high with difficulty, and we must rise — with the occasion.”

“The Newsroom” was Sorkin’s last involvement with a continuing TV series. He was working on his directorial movie debut, “Molly’s Game,” when Trump got elected.

Since then, he has adapted Harper Lee’s “To Kill a Mockingbird” for Broadway, with “The Newsroom’s” Jeff Daniels as Atticus Finch.

Sorkin being Sorkin, he ran into a legal dispute with Lee’s estate when he updated the source material to be a little more open about the racial tension that underlies the story. Aaron Sorkin is not one to let sleeping dogmas lie. 

Aaron Sorkin also wrote a letter to his daughter and wife on the day after the 2016 election, a letter that perhaps says it all.

It began, “Well the world changed late last night in a way I couldn’t protect us from.”

He was saying that as a husband and father. But I think it was a message for us all — a message of frustration and sadness. He closed the letter by saying “I will not hand [my daughter] a country shaped by hateful and stupid men. Your tears last night woke me up, and I’ll never go to sleep on you again.”

Yes, Sorkin was preaching when he was scripting “The Newsroom.” But he was right. We should do better. 

In that spirit, I’ll continue to dissect the Reuters study on the current state of journalism I mentioned last week. And I’ll do this because I think we have to hold our information sources to “doing better.” We have to do a better job of supporting those journalists that are doing better. We have to be willing to reject the “dogmas of the quiet past.” 

One of those dogmas is news supported by advertising. The two are mutually incompatible. Ad-supported journalism is a popularity contest, with the end product a huge audience custom sliced, diced and delivered to advertisers — instead of a well-informed populace.

We have to do better than that.

The Potential Woes of Working from Home

Many of you now have a few months of working virtually from home, rather than going to the office, under your belts. At least some of you are probably considering continuing to do so even after COVID recedes and the all-clear is given to return to normal. A virtual workplace makes all kinds of rational sense – both for employees and employers. But there are irrational reasons why you might want to think twice before you fully embrace going virtual.

About a decade ago, my company also went with a hybrid virtual/physical workplace. As the CEO, there was a lot I liked about it. It was a lot more economical than leasing more office space. It gave us the flexibility to recruit top talent in areas where we had no physical presence. And it seemed that technology was up to the task of providing the communication and work-flow tools we needed to support our virtual members.

On the whole, our virtual employees also seemed to like it. It gave them more flexibility in their workday. It also made it less formal. If you wanted to work in pajamas and bunny slippers, so be it. And with a customer base spread across many time zones, it also made it easier to shift client calls to times that were mutually acceptable.

It seemed to be a win-win. For a while. Then we noticed that all was not wonderful in work-from-home land.

I can’t say productivity declined. We were always a results-based workplace, so as long as the work got done, we were happy. But we started to feel a shift in our previously strong corporate culture. Team-member complaints about seemingly minor things skyrocketed. There was less cohesion across teams. Finally – and most critically – it started to impact our relationships with our customers.

Right about the time all this was happening, we were acquired by a much bigger company. One of the dictates that was handed down from the new owners was that we establish physical offices and bring our virtual employees back to the mothership for the majority of their work-week. At the time, I wasn’t fully aware of the negative consequences of going virtual so I initially fought the decision. But to be honest, I was secretly happy. I knew something wasn’t quite right. I just wasn’t sure what it was. I suspected it might have been our new virtual team members.

The move back to a physical workplace was a tough one. Our virtual team members were very vocal about how this was a loss of their personal freedom. New HR fires were erupting daily and I spent much of my time fighting them. This, combined with the inevitable cultural consequences of being acquired, often made me shake my head in bewilderment. Life in our company was turning into a shit-show.

I wish I could say that after we all returned to the same workplace, we joined hands and sang a rousing chorus of Kumbaya. We didn’t. The damage had been done. Many of the disgruntled former virtual team members ended up moving on. The cultural core of the company remained with our original team members who had worked in the same office location for several years. I eventually completed my contract and went my own way.

I never fully determined what the culprit was. Was it our virtual team members? Or was it the fact that we embraced a virtual workplace without considering unintended consequences? I suspected it was a little of both.

Like I said, that was a decade ago. From a rational perspective, all the benefits of a virtual workplace seem even more enticing than they did then. But in the last 10 years, there has been research done on those irrational factors that can lead to the cracks in a corporate culture that we experienced.

Mahdi Roghanizad is an organizational behavior specialist from Ryerson University in Toronto. He has long looked at the limitations of computerized communication. And his research provides a little more clarity into our failed experiment with a virtual workplace.

Roghanizad has found that without real-life contact, the parts of our brain that provide us with the connections needed to build trust never turn on. In order to build a true relationship with another person, we need something called the Theory of Mind. According to Wikipedia, “Theory of mind is necessary to understand that others have beliefs, desires, intentions, and perspectives that are different from one’s own.”

But unless we’re physically face-to-face with another person, our brain doesn’t engage in this critical activity. “Eye contact is required to activate that theory of mind and when the eye contact is not there, the whole other signal information is not processed by our brain,” said Roghanizad. Even wearing a pair of sunglasses is enough to short-circuit the process. Relegating contact to a periodic Zoom call guarantees that this empathetic part of our brains will never kick in.

But it’s not just being eyeball to eyeball. There are other non-verbal cues we rely on to connect with other people and create a Theory of Mind. Other research has shown the importance of pheromones and physical gestures like crossing your arms and leaning forward or back. This is why we subconsciously start to physically imitate people we’re talking to. The stronger the connection with someone, the more we imitate them.

This all comes back to the importance of bandwidth in the real world. A digital connection cannot possibly incorporate all the nuance of a face-to-face connection. And whether we realize it or not, we rely on that bandwidth to understand other people. From that understanding comes the foundations of trusted relationships. And trusted relationships are the difference between a high-functioning work team and a dysfunctional one.

I wish I had known that ten years ago.

Is the Marketing Industry Prepared for What Lies Ahead?

It was predictable. Humans are starting to do what humans do. We are beginning to shift gears, working our way through the stages of shock. We are daring to look beyond today and wondering what tomorrow might be like. Very smart people, like Sapiens author Yuval Noah Harari, are concerned about what we may trade away in the teeth of this crisis.

Others, like philosopher Barbara Muraca, climate change advocate Greta Thunberg and Media Spin’s own Kaila Colbin, are hoping that this might represent a global reset moment for us. Perhaps this will finally break our obsession with continual year-after-year growth, fueled by our urges to acquire and consume. We in the advertising and marketing business kept dumping gas on this unsustainable dumpster fire. Some of us – including myself – hope that COVID-19 will be a kind of shock therapy, convincing us to take a kinder, gentler approach to both the planet and each other.

My own crystal ball gazing is on a much-reduced scale. Specifically, I’m wondering what advertising and marketing might be like in our immediate future. I started by looking back at what history can teach us about recovery from a crisis.

Both World Wars resulted in explosions of consumerism. One could probably make the argument that the consumerism that happened after World War II has continued pretty much uninterrupted right to the current day. We basically spent our way out of the dotcom implosion of 1999 – 2002 and the Great Recession of 2007 – 2009.

But will this be different? I think it will, for three reasons.

One, both World Wars repressed consumer demand for a matter of years. With World War I, it was four years, plus another three marked by the Spanish Flu pandemic and a brief but sharp recession as the economy shifted gears from wartime to peacetime. With World War II, it was six years of repressed consumerism.

Secondly, the wars presented those of us here in North America with a very different psychological landscape. We went “over there” to fight and then “came home” when it was done. The war wasn’t on our front stoop. That gave us both physical and emotional distance after the war was over.

Finally, when the war was over, it was over. The world had to adjust to a new normal, but the fighting had stopped. That gave consumers a clear mental threshold to step beyond. You didn’t have to worry that you might be called back into service on any day, returning once again to the grim reality that was.

For these three reasons, I think our consumer mentality may look significantly different in the coming months. As we struggle back to whatever normal is between now and the discovery of a vaccine – currently estimated at 12 to 18 months away – we will have a very different consumer reality. We don’t have years of pent-up consumer demand that will wash away any pragmatic thoughts of restraint. We have been dealing with a crisis that has crept into our very homes. It has been present in every community, every neighborhood. And – most importantly – we will be living in a constant state of anxiety and fear for the foreseeable future. These three things are going to have a dramatic impact on our desire to consume.

Blogger Tomas Pueyo did a good job of outlining what our new normal may look like in his post “The Hammer and the Dance.” We are still very much in the “Hammer” phase, but we are beginning to wonder what the “Dance” may look like.

In our immediate future, we are going to hear a lot about the basic reproduction number – denoted as R0, or “R naught.” This is the measure of the number of cases, on average, an infected person will cause during their infectious period. A highly infectious disease, like measles, has an R naught of between 12 and 18. Current estimates put COVID-19’s R naught between 1.5 and 3.5. Most models assume an R naught of 2.4 or so.

This is important to understand if we want to know what our habits of consumption might look like until a vaccine is found. As long as that R naught number is higher than 1, the disease continues to spread. If we can get it lower than 1, the numbers stabilize and eventually decline. The “Dance” Pueyo refers to is the set of actions that need to be taken to keep R naught below 1 without completely stalling the economy. With extremely restrictive measures you could theoretically reduce R naught to zero, but in the process you shut down the entire economy. Relax the restrictions too much, and R naught climbs back up into exponentially increasing territory.
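To make that threshold concrete, here is a minimal back-of-the-envelope sketch in Python. It is not an epidemiological model; it simply multiplies case counts across infection “generations.” The specific values used (2.4 for unchecked spread, 0.9 for spread under restrictions) are illustrative assumptions drawn from the ranges above.

```python
def cases_per_generation(r0, initial_cases=100, generations=10):
    """Number of new cases in each infection 'generation',
    assuming every infected person causes r0 new cases on average."""
    counts = [initial_cases]
    for _ in range(generations):
        counts.append(round(counts[-1] * r0))
    return counts

# R naught of 2.4 (a common modeling assumption): exponential growth.
unchecked = cases_per_generation(2.4)

# R naught of 0.9 (the "Dance": restrictions hold it just below 1):
# the same outbreak stabilizes and dwindles instead.
restricted = cases_per_generation(0.9)

print(unchecked)   # 100 initial cases grow into the hundreds of thousands
print(restricted)  # 100 initial cases shrink steadily toward zero
```

The asymmetry is the whole point: a small change in R naught on either side of 1 is the difference between an outbreak that explodes and one that fades away.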

Much of the commentary I’m reading is assuming we will go back to “normal” or some variation of it. But the new “normal” is this dance, where we will be balanced on the knife’s edge between human cost and economic cost. For the next several months we will be teetering from one side to the other. At best, we can forget about widespread travel, large public gatherings and sociability as we previously knew it. At worst, we go back into full lockdown.

This is the psychological foundation our consumption will be based on. We will be in a constant state of anxiety and fear. And our marketing strategies will have to address that. Further, marketing needs to factor in this new normal in an environment where brand messaging is no longer a unilateral exercise. It is amplified and bent through social media. Frayed nerves make for a very precarious arena in which to play a game we’re still learning the rules of. We can expect ideological and ethical divisions to widen and deepen during the Dance.

The duration of the Dance will be another important factor to consider when we think about marketing. If it goes on long enough, our temporary behavioral shifts become habits. The changes that are forced upon us may become permanent. Will we ever feel good about stepping aboard a cruise ship? Will we ever be comfortable again in a crowded restaurant or bar? Will we pay $300 to be jammed into a stadium with 50,000 other people? We don’t know the answers to these questions yet.

Successful marketing depends on being able to anticipate the mood of the audience. The Dance will make that much more difficult. We will be racing up and down Maslow’s Hierarchy of Needs at a frenzied pace that would make a game of Snakes and Ladders seem mild by comparison.

That’s what happens in times of crisis. In normal times we spend our lifetimes scaling Abraham Maslow’s elegant model from the bottom – fulfilling our most basic physical needs – to the top – an altruistic concern for the greater good. Most of us spend much of our time stuck somewhere north of the middle, obsessed with our own status and shoring up our self-esteem. It’s this relatively stable mental state that the vast majority of marketing is targeted at.

But in a chronic crisis mode like that which is foretold by the Dance, we can crash from the top to the bottom of the hierarchy in the blink of an eye. And we will all be doing this at different times in different locations.

The Dance introduces a consumer scenario marketers have never encountered before. We will crave comfort and security. We will be desperate for glimpses of positivity. And it’s certain that our values and beliefs will shift, but it’s difficult to predict in which direction. While my hope is that we become kinder and gentler people, I suspect it will be a toss-up. It could well go the other way.

If you thought marketing was tough before, buckle up!

Bubbles, Bozos and the Mediocrity Sandwich

I spent most of my professional life inside the high-tech bubble. Having now survived the better part of a decade outside said bubble, I have achieved enough distance to be able to appreciate the lampooning skills of Dan Lyons. If that name doesn’t sound familiar, you may have seen his work. He was the real person behind the Fake Steve Jobs blog. He was also the senior technology editor for Forbes and Newsweek prior to being cut loose in the print media implosion. He later joined the writing staff of Mike Judge’s brilliant HBO series “Silicon Valley.”

Somewhere in that career arc, Lyons briefly worked at a high-tech start-up. From that experience, he wrote “Disrupted: My Misadventure in the Start-Up Bubble.” It gives new meaning to the phrase “painfully funny.”

After being cast adrift by Forbes, Lyons decided to change his perspective on the Bubble from “outside looking in” to “inside looking out.” He wanted to jump on the bubble bandwagon, grab a fistful of options and cash in. And so he joined HubSpot as a content producer for their corporate blog. The story unfolds from there.

One particularly sharp and insightful chapter of the book recalls Steve Jobs’ “Bozo Explosion”:

“Apple CEO Steve Jobs used to talk about a phenomenon called a ‘bozo explosion,’ by which a company’s mediocre early hires rise up through the ranks and end up running departments. The bozos now must hire other people, and of course they prefer to hire bozos. As Guy Kawasaki, who worked with Jobs at Apple, puts it: ‘B players hire C players, so they can feel superior to them, and C players hire D players.’ “

The Bozo Explosion is mostly peculiar to tech start-ups, mainly because of some of the aspects of the culture I talked about in a previous column. But I ran into my own version back in my consulting career. And I ran into it in all kinds of companies. I used to call it the Mediocrity Sandwich.

The Mediocrity Sandwich lives in middle management. I used to find that the people at the C-level of the company were usually pretty smart and competent (that said, I did run across some notable exceptions in my time). I also found that the people on the customer-facing front lines of the company were pretty smart and – more importantly – very aware of the company’s own issues.

But addressing those issues invariably caused a problem. You had senior executives who were certainly capable of fixing the problems, whatever they might be. And you had front-line employees who were painfully aware of what the problems were and motivated to implement solutions. But all the momentum of any real problem-solving initiative got sucked out somewhere in the middle of the corporate org chart. The problem was the Mediocrity Sandwich.

You see, I don’t think the Bozo Explosion is so much a pyramid – skinny at the top, broad at the bottom – as it is an inverted U-shaped curve. I think “bozoism” tends to peak in the middle. You certainly have the progression from A’s to B’s to C’s as you move down from the top executive rungs. But then you have the inverse happening as you move from middle management to the front lines. The problem is the attrition of competence as people become absorbed into the organization. It’s the Bozo Explosion in reverse.

I usually found there was enough breathing room for competence to survive at the entry level of the organization. There were enough degrees of separation between the front line and the bozos in middle management. But as you started to climb the corporate ladder, you kept getting closer to the bozos. Your job frustration began to climb as they had more influence over your day-to-day work. Truly competent players bailed and moved on to less bozo-infested environments. Those who remained were either born bozos or had bozo-ness thrust upon them. Either way, as you climbed toward middle management, the bozo factor climbed in lockstep. The result? A bell curve of bozos centered in the middle between the C-level and the front lines.

This creates a poisonous outlook for the long-term prospects of a company. Eventually, the C-level executives will age out of their jobs. But who will replace them? The internal farm team is a bunch of bozos. You can recruit from outside, but then the incoming talent inherits a Mediocrity Sandwich. The company begins to rot from within.

For companies to truly change, you have to root out the bozo-rot, but this is easier said than done. If there is one single thing that bozos are good at, it is bozo butt-covering.

The Fundamentals of an Evil Marketplace

Last week, I talked about the nature of tech companies and why this leads to them being evil. But as I said, there was an elephant in the room I didn’t touch on — and that’s the nature of the market itself. The platform-based market also has inherent characteristics that lead toward being evil.

The problem is that corporate ethics are usually based on the philosophies of Milton Friedman, an economist whose heyday was in the 1970s. Corporations are playing by a rule book that is tragically out of date.

Beware the Invisible Hand

Friedman said, “The great virtue of a free market system is that it does not care what color people are; it does not care what their religion is; it only cares whether they can produce something you want to buy. It is the most effective system we have discovered to enable people who hate one another to deal with one another and help one another.”

This is a porting over of Adam Smith’s “Invisible Hand” theory from economics to ethics: the idea that an open and free marketplace is self-regulating and, in the end, the model that is the most virtuous to the greatest number of people will take hold.

That was a philosophy born in another time, referring to a decidedly different market. Friedman’s “virtue” depends on a few traditional market conditions, idealized in the concept of a perfect market: “a market where the sellers of a product or service are free to compete fairly, and sellers and buyers have complete information.”

Inherent in Friedman’s definition of market ethics is the idea of a deliberate transaction, a value trade driven by rational thought. This is where the concept of “complete information” comes in. This information is what’s required for a rational evaluation of the value trade. When we talk about the erosion of ethics we see in tech, we quickly see that the prerequisite of a deliberate and rational transaction is missing — and with it, the conditions needed for an ethical “invisible hand.”

The other assumption in Friedman’s definition is a marketplace that encourages open and healthy competition. This gives buyers the latitude to make the choice that best aligns with their requirements.

But when we’re talking about markets that tend to trend towards evil behaviors, we have to understand that there’s a slippery slope that ends in a place far different than the one Friedman idealized.

Advertising as a Revenue Model

For developers of user-dependent networks like Google and Facebook, using advertising sales for revenue was the path of least resistance for adoption — and, once adopted by users, to profitability. It was a model co-opted from other forms of media, so everybody was familiar with it. But, in the adoption of that model, the industry took several steps away from the idea of a perfect market.

First of all, the bar required for that rational value-exchange calculation has been significantly lowered. For users, there is no apparent monetary cost. Our value-judgment mechanisms idle down, because it doesn’t appear as if the protection they provide is needed.

In fact, the opposite happens. The reward center of our brain perceives a bargain and starts pumping the accelerator. We rush past the accept buttons to sign up, thrilled at the new capabilities and convenience we receive for free. That’s the first problem.

The second is that the minute you introduce advertising, you lose the transparency that’s part of the perfect market. There is a thick layer of obfuscation that sits between “users” and “producers.” The smoke screen is required because of the simple reality that the best interests of the user are almost never aligned with the best interests of the advertiser.

In this new marketplace, advertising is a zero-sum game. For the advertiser to win, the user has to lose. The platforms’ developers hide this simple arithmetic behind a veil of secrecy and baffling language.

Products That are a Little Too Personal

The new marketplace is different in another important way: The products it deals in are unlike any products we’ve ever seen before.

The average person spends about a third of his or her time online, mostly interacting with a small handful of apps and platforms. Facebook alone accounts for almost 20% of all our waking time.

This reliance on these products reinforces our belief that we’re getting the bargain of a lifetime: All the benefits the platform provides are absolutely free to us! Of course, in the time we spend online, we are feeding these tools a constant stream of intimately personal information about ourselves.

What is lurking behind this benign facade is a troubling progression of addictiveness. Because revenue depends on advertising sales, two factors become essential to success: the attention of users, and information about them.

An offer of convenience or usefulness “for free” is the initial hook, but then it becomes essential to entice users to spend more time with the platform and to volunteer more information about themselves. The most effective way to do this is to make them more and more dependent on the platform.

Now, you could build conscious dependency by giving users good, rational reasons to keep coming back. Or, you could build dependence subconsciously, by creating addicts. The first option is good business that follows Friedman’s philosophy. The second option is just evil. Many tech platforms — Facebook included — have chosen to go down both paths.

The New Monopolies

The final piece of Friedman’s idealized marketplace that’s missing is the concept of healthy competition. In a perfect marketplace, the buyer’s cost of switching is minimal. You have a plethora of options to choose from, and you’re free to pursue the one that’s best for you.

This is definitely not the case in the marketplace of online platforms and tools like Google and Facebook. Because they are dependent on advertising revenues, their survival is linked to audience retention. To this end, they have constructed virtual monopolies by ruthlessly eliminating or buying up any potential competitors.

Further, under the guise of convenience, they have imposed significant costs on those that do choose to leave. The net effect of this is that users are faced with a binary decision: Opt into the functionality and convenience offered, or opt out. There are no other choices.

Whom Do You Serve?

Friedman also said, in a 1970 essay, that the only social responsibility of a business is to increase its profits. But this begs the further question: “What must be done — and for whom — to increase profits?” If it’s creating a better product so users buy more, then there is an ethical trickle-down effect that should benefit all.

But this isn’t the case if profitability is dependent on selling more advertising. Now we have to deal with an inherent ethical conflict. On one side, you have the shareholders and advertisers. On the other, you have users. As I said, for one to win, the other must lose. If we’re looking for the root of all evil, we’ll probably find it here.

Why Good Tech Companies Keep Being Evil

You’d think we’d have learned by now. But somehow it still comes as a shock to us when tech companies are exposed as having no moral compass.

Slate recently released what it called the “Evil List” of 30 tech companies, compiled through a ballot sent to journalists, scholars, analysts, advocates and others, asking which companies were doing business in the way that troubled them most. Spoiler alert: Amazon, Facebook and Google topped the list. But they weren’t alone. Rounding out the top 10 were Twitter, Apple, Microsoft and Uber.

Which begs the question: Are tech companies inherently evil — like, say, a Monsanto or a Philip Morris — or is there something about tech that positively correlates with “evilness”?

I suspect it’s the second of these. I don’t believe Silicon Valley is full of fundamentally evil geniuses, but business as usual at a successful tech firm includes several elemental aspects of the culture that can take a company down the path to being evil.

Cultism, Loyalism and Self-Selection Bias

A successful tech company is a belief-driven meat grinder that sucks in raw, naïve talent on one end and spits out exhausted and disillusioned husks on the other. To survive in between, you’d better get with the program.

The HR dynamics of a tech startup have been called a meritocracy, where intellectual prowess is the only currency.

But that’s not quite right. Yes, you have to be smart, but it’s more important that you’re loyal. Despite their brilliance, heretics are weeded out and summarily turfed, optionless in more ways than one. A rigidly molded group-think mindset takes over the recruitment process, leading to an intellectually homogeneous monolith.

To be fair, high-growth startups need this type of mental cohesion. As blogger Paras Chopra said in a post entitled “Why startups need to be cult-like”: “The reason startups should aim to be like cults is because communication is impossible between people with different values.” You can’t go from zero to 100 without this sharing of values.

But necessary or not, this doesn’t change the fact that your average tech startup is a cult, with all the same ideological underpinnings. And the more cult-like the culture, the less likely it is to take the time for a little ethical navel-gazing.

A Different Definition of Problem Solving

When all you have is a hammer, everything looks like a nail. And for the engineer, the hammer that fixes everything is technology. But, as academic researchers Emanuel Moss and Jacob Metcalf discovered, this brand of technical solutionism can lead to a corporate environment where ethical problems are ignored because they are open-ended, intractable questions. In a previous column I referred to them as “wicked problems.”

As Moss and Metcalf found, “Organizational practices that facilitate technical success are often ported over to ethics challenges. This is manifested in the search for checklists, procedures, and evaluative metrics that could break down messy questions of ethics into digestible engineering work. This optimism is counterweighted by a concern that, even when posed as a technical question, ethics becomes ‘intractable, like it’s too big of a problem to tackle.’”

If you take this to the extreme, you get the Cambridge Analytica example, where programmer Christopher Wylie was so focused on the technical aspects of the platform he was building that he lost sight of the ethical monster he was unleashing.

A Question of Leadership

Of course, every cult needs a charismatic leader, and this is abundantly true for tech-based companies. Hubris is a commodity not in short supply among the C-level execs of tech.

It’s not that they’re assholes (well, ethical assholes anyway). It’s just that they’re, umm, highly focused and instantly dismissive of any viewpoint that’s not the same as their own. It’s the same issue I mentioned before about the pitfalls of expertise — but on steroids.

I suspect that if you did an ethical inventory of Mark Zuckerberg, Jeff Bezos, Larry Page, Sergey Brin, Travis Kalanick, Reid Hoffman and the rest, you’d find that — on the whole — they’re not horrible people. It’s just that they have a very specific definition of ethics as it pertains to their company. Anything that falls outside those narrowly defined boundaries is either dismissed or “handled” so it doesn’t get in the way of the corporate mission.

Speaking of corporate missions, leaders and their acolytes are often unaware — sometimes intentionally so — of the nuances of unintended consequences. Most tech companies develop platforms that allow disruptive new market-based ecosystems to evolve on their technological foundations. Disruption always unleashes unintended social consequences. When these inevitably happen, tech companies generally handle them in one of three ways:

  1. Ignore them, and if that fails…
  2. Deny responsibility, and if that fails…
  3. Briefly apologize, do nothing, and then return to Step 1.

There is a weird type of idol worship in tech. The person atop the org chart is more than an executive. They are corporate gods — and those who dare to disagree are quickly weeded out as heretics. This helps explain why Facebook can be pilloried for attacks on personal privacy and questionable design ethics, yet Mark Zuckerberg still snags a 92% CEO approval rating on Glassdoor.com.

These fundamental characteristics help explain why tech companies seem to consistently stumble over to the dark side. But there’s an elephant in the room we haven’t talked about: Almost without exception, tech business models encourage evil behavior. Let’s hold that thought for a future discussion.

The Hidden Agenda Behind Zuckerberg’s “Meaningful Interactions”

It probably started with a good intention. Facebook – aka Mark Zuckerberg – wanted to encourage more “Meaningful Interactions”. And so, early last year, Facebook engineers started making some significant changes to the algorithm that determined what you saw in your News Feed. Here are some excerpts from Zuck’s post to that effect:

“The research shows that when we use social media to connect with people we care about, it can be good for our well-being. We can feel more connected and less lonely, and that correlates with long term measures of happiness and health. On the other hand, passively reading articles or watching videos — even if they’re entertaining or informative — may not be as good.”

That makes sense, right? It sounds logical. Zuckerberg went on to say how they were changing Facebook’s algorithm to encourage more “Meaningful Interactions.”

“The first changes you’ll see will be in News Feed, where you can expect to see more from your friends, family and groups.

As we roll this out, you’ll see less public content like posts from businesses, brands, and media. And the public content you see more will be held to the same standard — it should encourage meaningful interactions between people.”


Let’s fast-forward almost two years, and we now see the outcome of that good intention: an ideological landscape with a huge chasm where the middle ground used to be.

The problem is that Facebook’s algorithm naturally favors content from like-minded people. And it doesn’t take a very high degree of ideological homogeneity to create a highly polarized landscape. This shouldn’t have come as a surprise: American economist Thomas Schelling showed us, almost 50 years ago, how easily segregation can happen.

The Schelling Model of Segregation was created to demonstrate why racial segregation was such a chronic problem in the U.S., even given repeated efforts to desegregate. The model showed that even when we’re pretty open minded about who our neighbors are, we will still tend to self-segregate over time.

The model works like this. A grid represents a population of two different types of agents: X and O. The square an agent occupies represents where it lives. If the agent is satisfied, it stays put. If it isn’t satisfied, it moves to a new location. The variable here is the level of satisfaction, determined by what percentage of an agent’s immediate neighbours are the same type of agent as it is. For example, the level of satisfaction might be set at 50%, meaning an X agent needs at least 50% of its neighbours to also be of type X. (If you want to try the model firsthand, Frank McCown, a computer science professor at Harding University, created an online version.)

The most surprising thing that comes out of the model is that this threshold of satisfaction doesn’t have to be set very high at all for extensive segregation to happen over time. You start to see significant “clumping” of agent types at percentages as low as 25%. At 40% and higher, you see sharp divides between the X and O communities. Remember, even at 40%, that means that Agent X only wants 40% of their neighbours to also be of the X persuasion. They’re okay being surrounded by up to 60% Os. That is much more open-minded than most human agents I know.
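The dynamic is simple enough to sketch in a few lines of code. Below is a minimal, illustrative Python version of the model as described above (a grid of X and O agents, a satisfaction threshold, and unsatisfied agents relocating to random vacant cells). It is not McCown’s implementation; the grid size, vacancy rate and step count are arbitrary choices for demonstration.

```python
import random

def run_schelling(size=20, threshold=0.4, empty_ratio=0.2, steps=60, seed=42):
    """Minimal Schelling segregation model.

    Grid cells hold 'X', 'O', or None (vacant). Each step, every agent
    whose share of same-type neighbours falls below `threshold` moves
    to a randomly chosen vacant cell. Returns the mean fraction of
    same-type neighbours before and after the simulation runs.
    """
    rng = random.Random(seed)

    # Populate the grid: some cells vacant, the rest split between X and O.
    cells = []
    for _ in range(size * size):
        r = rng.random()
        if r < empty_ratio:
            cells.append(None)
        else:
            cells.append('X' if r < (1 + empty_ratio) / 2 else 'O')
    grid = [cells[i * size:(i + 1) * size] for i in range(size)]

    def neighbours(r, c):
        # Yield the occupants of the (up to 8) adjacent cells.
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                rr, cc = r + dr, c + dc
                if 0 <= rr < size and 0 <= cc < size and grid[rr][cc] is not None:
                    yield grid[rr][cc]

    def similarity(r, c):
        # Fraction of an agent's neighbours that share its type.
        same = other = 0
        for n in neighbours(r, c):
            if n == grid[r][c]:
                same += 1
            else:
                other += 1
        return 1.0 if same + other == 0 else same / (same + other)

    def mean_similarity():
        vals = [similarity(r, c)
                for r in range(size) for c in range(size) if grid[r][c]]
        return sum(vals) / len(vals)

    before = mean_similarity()
    for _ in range(steps):
        movers = [(r, c) for r in range(size) for c in range(size)
                  if grid[r][c] and similarity(r, c) < threshold]
        vacant = [(r, c) for r in range(size) for c in range(size)
                  if grid[r][c] is None]
        rng.shuffle(movers)
        for r, c in movers:
            if not vacant:
                break
            i = rng.randrange(len(vacant))
            nr, nc = vacant[i]
            vacant[i] = (r, c)  # the mover's old cell becomes vacant
            grid[nr][nc], grid[r][c] = grid[r][c], None
    return before, mean_similarity()
```

Because agents start out randomly placed, the initial same-type neighbour share hovers around 50%; with the threshold at only 40%, repeated local moves tend to drive it well above that, which is the “clumping” the model demonstrates.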

Now, let’s move the Schelling Model to Facebook. We know from the model that even pretty open-minded people will physically segregate themselves over time. The difference is that on Facebook, they don’t move to a new part of the grid, they just hit the “unfollow” button. And the segregation isn’t physical – it’s ideological.

This natural behavior is then accelerated by Facebook’s “Meaningful Interactions” algorithm, which filters on the basis of the people you have connected with, setting in motion an ever-tightening spiral that eventually restricts your feed to a very narrow ideological horizon. The resulting cluster then becomes a segment used for ad targeting. We can quickly see how Facebook both intentionally built these very homogeneous clusters by changing its algorithm and then profits from them by providing advertisers the tools to micro-target them.

Finally, after doing all this, Facebook absolves itself of any responsibility to ensure subversive and blatantly false messaging isn’t delivered to these ideologically vulnerable clusters. It’s no wonder comedian Sacha Baron Cohen just took Zuck to task, saying “if Facebook were around in the 1930s, it would have allowed Hitler to post 30-second ads on his ‘solution’ to the ‘Jewish problem.’”

In rereading Mark Zuckerberg’s post from two years ago, you can’t help but start reading between the lines. First of all, there is mounting evidence that disproves his contention that meaningful social media encounters help your well-being. It appears that quitting Facebook entirely is much better for you.

And secondly, I suspect that – just like his defence of running false and malicious advertising by citing free speech – Zuck has a not-so-hidden agenda here. I’m sure Zuckerberg and his Facebook engineers weren’t oblivious to the fact that their changes to the algorithm would result in nicely segmented psychographic clusters that would be like catnip to advertisers – especially political advertisers. They were consolidating exactly the same vulnerabilities that were exploited by Cambridge Analytica.

They were building a platform that was perfectly suited to subvert democracy.

Running on Empty: Getting Crushed by the Crush It Culture

“Nobody ever changed the world on 40 hours a week.”

Elon Musk

Those damned Protestants and their work ethic. Thanks to them, unless you’re willing to put in a zillion hours a week, you’re just a speed bump on the road to all that is good in the world. Take Mr. Musk, for example. If you happen to work at Tesla, or SpaceX, or the Boring Company, Elon has figured out what your average work week should be: “(It) Varies per person, but about 80 sustained, peaking above 100 at times. Pain level increases exponentially above 80.”

“Pain level increases exponentially above 80”? WTF, Mr. Musk!

But he’s not alone. Google famously built its Mountain View campus so employees never had to go home. Alibaba Group founder Jack Ma calls the intense work culture at his company a “huge blessing.” He calls it the “996” work schedule: 9 a.m. to 9 p.m., 6 days a week. That’s 72 hours, if you’re counting. But even that wouldn’t cut it if you worked for Elon Musk. You’d be a deadbeat.

This is the “Crush It” culture, where long hours equate to dedication and – by extension – success. No pain, no gain.

We spend lots of time talking about the gain — so let me spend just one column talking about the pain: mental illness, severe depression, long-term disabilities and strokes. Those who overwork are more likely to overeat, smoke, drink excessively and develop other self-destructive habits.

You’re not changing the world. You’re shortening your life. The Japanese call it karoshi: death by overwork.

Like so many things, this is another unintended consequence of a digitally mediated culture. Digital speeds everything up. But our bodies – and brains – aren’t digital. They burn out if they move too fast – or too long.

Overwork as a sign of superior personal value is a fairly new concept in the span of human history. It came from the Puritans who settled in New England. They believed that those that worked hard at their professions were those chosen to get into heaven. The more wealth you amassed from your work, the more evidence there was that you were one of the chosen.

Lately, the creeping Capitalist culture of over-working has most firmly embedded itself in the tech industry. There, the number of hours you work has become a proxy of your own worth. A twisted type of machismo has evolved and has trapped us all into thinking that an hour not spent at our jobs is an hour wasted. We are looked down upon for wanting some type of balance in our lives.

Unfortunately for the Musks and Mas and other modern-day task masters – the biology just doesn’t support their proposed work schedules.

First, our brains need rest. Back in the 18th century when those Puritans proved their worth through work, earning a living was usually a physical endeavour. The load of overwork was spread amongst the fairly simple mechanical machinery of our own bodies. Muscles got sore. Joints ached. But they recovered.

The brain is a much more complex beast. When it gets overworked, it loses its executive ability to focus on the task at hand. When your work takes place on a desktop or laptop where there are unlimited diversions just a click away, you suddenly find yourself 45 minutes into an unplanned YouTube marathon or scrolling through your Facebook feed. It becomes a downward spiral that benefits no one.

An overworked mind also loses its ability to spin down in the evening so you can get an adequate amount of sleep. When your co-workers start boasting of being able to function on just 3 or 4 hours of sleep – they are lying. They are lying to you but, worse, they are lying to themselves. Very few of us can function adequately on less than 7 or 8 hours of sleep. For the rest of us, the negative effects start to accumulate. A study found that sleep deprivation has the same impact as drinking too much: those getting less than 7 hours of sleep fared the same or worse on a cognitive test as those with a 0.05% blood alcohol level. The legal limit in most states is 0.08%.

Finally, in an essay on Medium, Rachel Thomas points out that the Crush It Culture is discriminatory. Those who have a disability or chronic illness simply have fewer hours in the day to devote to work. They need time for medical support and usually require more sleep. In an industry like tech, where there is an unhealthy focus on the number of hours worked, these workers – who Thomas says make up at least 30% of the total workforce – are shut out.

The Crush It Culture is toxic. The science simply doesn’t support it. The only ones evangelizing it are those that directly benefit from this modernized version of feudalism.  It’s time to call Bullshit on them.

Why Elizabeth Warren Wants to Break Up Big Tech

Earlier this year, Democratic presidential candidate Elizabeth Warren posted an online missive in which she laid out her plans to break up big tech (notably Amazon, Google and Facebook). In it, she noted:

“Today’s big tech companies have too much power — too much power over our economy, our society, and our democracy. They’ve bulldozed competition, used our private information for profit, and tilted the playing field against everyone else. And in the process, they have hurt small businesses and stifled innovation.”

We in the West are big believers in Adam Smith’s Invisible Hand. We inherently believe that markets will self-regulate and eventually balance themselves. We are loath to involve government in the running of a free market.

In introducing the concept of the Invisible Hand, Smith speculated:

“[The rich] consume little more than the poor, and in spite of their natural selfishness and rapacity…they divide with the poor the produce of all their improvements. They are led by an invisible hand to make nearly the same distribution of the necessaries of life, which would have been made, had the earth been divided into equal portions among all its inhabitants, and thus without intending it, without knowing it, advance the interest of the society, and afford means to the multiplication of the species.”

In short, a rising tide raises all boats. But there is a dicey little dilemma buried in the midst of the Invisible Hand Premise – summed up most succinctly by the fictitious Gordon Gekko in the 1987 movie Wall Street: “Greed is Good.”

More eloquently, economist and Nobel laureate Milton Friedman explained it like this:

“The great virtue of a free market system is that it does not care what color people are; it does not care what their religion is; it only cares whether they can produce something you want to buy. It is the most effective system we have discovered to enable people who hate one another to deal with one another and help one another.” 

But here’s the thing. Up until very recently, the concept of the Invisible Hand dealt only with physical goods. It was all about maximizing tangible resources and distributing them to the greatest number of people in the most efficient way possible.

The difference now is that we’re not just talking about toasters or running shoes. Physical things are not the stock in trade of Facebook or Google. They deal in information, feelings, emotions, beliefs and desires. We are not talking about hardware any longer; we are talking about the very operating system of our society. The thing that guides the Invisible Hand is no longer consumption, it’s influence. And, in that case, we have to wonder: Are we willing to trust our future to the conscience of a corporation?

For this reason, I suspect Warren might be right. All the past arguments for keeping government out of business were based on a physical market. When we shift to a market that peddles influence, those arguments are flipped on their head. Milton Friedman himself said, “It (the corporation) only cares whether they can produce something you want to buy.” Shift that to today’s world and apply it to a corporation like Facebook: “It only cares whether they can produce something that captures your attention.” To expect anything else from a corporation that peddles persuasion is to expect too much.

The problem with Warren’s argument is that she is still using the language of a market that dealt with consumable products. She wants to break up a monopoly that is limiting competition. And she is targeting that message to an audience that generally believes that big government and free markets don’t mix.

The much, much bigger issue here is that even if you believe in the efficacy of the Invisible Hand, as described by all believers from Smith to Friedman, you also have to believe that the single purpose of a corporation that relies on selling persuasion will be to influence even more people more effectively. None of the most fervent evangelists of the Invisible Hand ever argued that corporations have a conscience. They simply stated that the interests of a profit-driven company and an audience intent on consumption were typically aligned.

We’re now playing a different game with significantly different rules.