It’s the Buzz That Will Kill You

If you choose to play in the social arena, you have to accept that the typical peaks and valleys of business success can suddenly become impossibly steep.

In social media networks, your brand message is whatever meme happens to emerge from the collective activity of this connected market. Marketers have little control — and sometimes, they have no control. At best, all they can do is react by throwing another carefully crafted meme into the social-sphere and hope it picks up some juice and is amplified through the network.

That’s exactly what happened to Peloton in the past week and a half.

On Dec. 9, the HBO Max sequel to “Sex and the City” killed off a major character — Chris Noth’s Mr. Big — by giving him a heart attack after his one-thousandth Peloton ride. Apparently, HBO Max gave Peloton no advance warning of this branding backhand.

On Dec. 10, according to Axios, there was a dramatic spike in social interactions talking about Mr. Big’s last ride, peaking near 80,000. As you can imagine, the buzz was not good for Peloton’s business.

On Dec. 12, Peloton struck back with its own ad, apparently produced in just 24 hours by Ryan Reynolds’ Maximum Effort agency. This turned the tide of the social buzz. Again, according to data from NewsWhip and Axios, social media mentions peaked. This time, they were much more positive toward the Peloton brand.

It should be all good — right? Not so fast. On Dec. 16, The Hollywood Reporter chronicled two sexual assault allegations made against Chris Noth. Peloton rapidly scrubbed its ad campaign. Again, the social sphere lit up, and Peloton was forced back into defensive mode.

Now, you might call all this marketing froth, but that’s the way it is in our hyper-connected world. You just have to dance the dance — be nimble and respond.

But my point is not about the marketing side of this brouhaha — which has been covered to death, at least at MediaPost (sorry, pardon the pun). I’m more interested in what happens to the people who have some real skin in this particular game, whose lives depend on the fortunes of the Peloton brand. Because all this froth does have some very IRL consequences.

Take Peloton’s share price, for one.

The day before the HBO show aired, Peloton’s shares were trading at $45.91. The next day, they tumbled 16% to $38.51.
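Those two share prices and the percentage are consistent with each other; here’s a quick back-of-the-envelope check, using only the figures cited in this piece:

```python
# Sanity check on the implied single-day drop, using the share prices cited above.
before = 45.91  # closing price the day before the episode aired
after = 38.51   # closing price the next trading day

drop_pct = (before - after) / before * 100
print(f"Implied one-day drop: {drop_pct:.1f}%")  # roughly the 16% cited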
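Those two share prices and the percentage are consistent with each other; here’s a quick back-of-the-envelope check, using only the figures cited in this piece:

```python
# Sanity check on the implied single-day drop, using the share prices cited above.
before = 45.91  # closing price the day before the episode aired
after = 38.51   # closing price the next trading day

drop_pct = (before - after) / before * 100
print(f"Implied one-day drop: {drop_pct:.1f}%")  # roughly the 16% cited
```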

And that’s just one chapter in the ongoing story of Peloton’s stock performance, which has been a hyper-compressed roller coaster ride. The pandemic and a huge amount of social media buzz kept the foot firmly on the accelerator through 2020, but the stock then dropped like a rock for most of 2021. After peaking as high as $162 a share exactly a year ago, the share price is back within spitting distance of its pre-pandemic levels.

Obviously, Peloton’s share price is not just dependent on the latest social media meme. There are business fundamentals to consider as well.

Still, you have to accept that a more connected meme-market is going to naturally accelerate the speed of business upticks and declines. Peloton signed up for this dance — and when you do that, you have to accept all that comes with it.

In terms of the real-world consequences of betting on the buzz, there are three “insider” groups (not including customers) that will be affected: the management, the shareholders and the employees. The first of these supposedly went into this with their eyes open. The second of these also made a choice. If they did their due diligence before buying the stock, they should have known what to expect. But it’s the last of these — the employees — that I really feel for.

With ultra-compressed business cycles like Peloton has experienced, it’s tough for employees to keep up. On the way up the peak, the company is running ragged trying to scale for hyper-growth. If you check employee review sites like Glassdoor.com, there are tales of creaky recruitment processes not being able to keep up. But at least the ride up is exciting. The ride down is something quite different.

In psychological terms, there is something called the locus of control: the degree to which you feel you have control over the events that affect you. And there is an ever-growing body of evidence showing that an internal locus of control and employee job satisfaction are strongly correlated. No one likes to be the one constantly waiting for the other shoe to drop. It just ramps up your job stress. Granted, job stress that comes with big promotions and generous options on a rocket-ship stock can perhaps be justified. But stress that’s packaged with panicked downsizing and imminent layoffs is not a fun employment package for anyone.

That’s the current case at Peloton. On Nov. 5, it announced an immediate hiring freeze. And while there’s been no official announcement of layoffs that I could find, there have been rumors of such posted to the site thelayoff.com. This is not a fun environment for anyone to function in. Here’s what one post said: “I left Peloton a year ago when I realized it was morphing into the type of company I had no intention of working for.”

We have built a business environment that is highly vulnerable to buzz. And as Peloton has learned, what the buzz giveth, the buzz can also taketh away.

When Social Media Becomes the Message

On Nov. 23, U.K. cosmetics firm Lush said it was deactivating its Instagram, Facebook, TikTok and Snapchat accounts until the social media environment “is a little safer.” And by a “safer” environment, the company didn’t mean for advertisers, but for consumers. Jack Constantine, chief digital officer and product inventor at Lush, explains in an interview with the BBC:

“[Social media channels] do need to start listening to the reality of how they’re impacting people’s mental health and the damage that they’re causing through their craving for the algorithm to be able to constantly generate content regardless of whether it’s good for the users or not.”

This was not an easy decision for Lush. It came with the possibility of a substantial cost to its business: “We already know that there is potential damage of £10m in sales and we need to be able to gain that back,” said Constantine. “We’ve got a year to try to get that back, and let’s hope we can do that.”

In effect, Lush is rolling the dice on a bet based on the unpredictable network effects of social media. Would the potential loss to its bottom line be offset by the brand uptick it would receive by being true to its core values? In talking about Lush’s move on the Wharton Business Daily podcast, marketing lecturer Annie Wilson pointed out the issues in play here:

“There could be positive effects on short-term loyalty and brand engagement, but it will be interesting to see the long-term effect on acquiring new consumers in the future.”

I’m not trying to minimize Lush’s decision here by categorizing it as a marketing ploy. The company has been very transparent about how hard it’s been to drop — even temporarily — Facebook and its other properties from the Lush marketing mix. The brand had previously closed several of its U.K. social media accounts, but eventually found itself “back on the channels, despite the best intentions.”

You can’t overstate how fundamental a decision this is for a profit-driven business. But I’m afraid Lush is probably an outlier. The brand is built on making healthy choices, and Lush eventually decided it had to stay true to that mission even if it hurt the bottom line.

Few businesses wear their hearts on their sleeves to the same extent as Lush. For every Lush that’s out there, there are thousands of brands that continue to feed their budgets to Facebook and its properties, even though they fundamentally disagree with the tactics of the channel.

There has been pushback against these tactics before. In July of 2020, 1,000 advertisers joined the #StopHateForProfit boycott against Facebook. That sounds impressive — until you realize that Facebook has 9 million advertising clients. The boycotters represented just over 0.01% of all advertisers. Even with the support of other advertisers who didn’t join the boycott but still scaled back their ad spend, the campaign had only a fleeting effect on Facebook’s bottom line. Almost all of the advertisers eventually returned after the boycott.
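For a sense of scale, that share can be worked out directly from the two figures above (1,000 boycotters against a base of 9 million advertisers):

```python
# The boycott's share of Facebook's advertiser base, using the figures cited above.
boycotters = 1_000
total_advertisers = 9_000_000

share_pct = boycotters / total_advertisers * 100
print(f"Share of all advertisers: {share_pct:.3f}%")  # about 0.011% -- "just over 0.01%"
```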

As The New York Times reported at the time, the damage wasn’t so much to Facebook’s pocketbook as to its reputation. Stephen Hahn-Griffiths, the executive vice president of the public opinion analysis company RepTrak, wrote in a follow-up post:

“What could really hurt Facebook is the long-term effect of its perceived reputation and the association with being viewed as a publisher of ‘hate speech’ and other inappropriate content.”

Of course, that was all before the emergence of a certain Facebook product manager by the name of Frances Haugen. The whistleblower released thousands of internal documents to The Wall Street Journal this past fall, and the paper went public with them in September in a series called “The Facebook Files.” If we had any doubt about the culpability of Zuckerberg et al., this pretty much laid it to rest.

Predictably, after the story broke, Facebook made some halfhearted attempts to clean up its act by introducing new parental controls on Instagram and Facebook. This follows the typical Facebook playbook for dealing with emerging shit storms: do the least amount possible, while talking about it as much as possible. It’s a tactic known as “purpose-washing.”

The question is, if this is all you do after a mountain of evidence points to you being truly awful, how sincere are you about doing the right thing? This puts Facebook in the same category as Big Tobacco, and that’s pretty crappy company to be in.

Lush’s decision to quit Facebook also pinpoints an interesting dilemma for advertisers: What happens when an advertising platform that has been effective in attracting new customers becomes so toxic that it damages your brand just by being on it? What happens when, as Marshall McLuhan famously said, the medium becomes the message?

Facebook is not alone in this. With the systematic dismantling of objective journalism, almost every news medium now carries its own message. This is certainly true for channels like Fox News. By supporting these platforms with advertising, advertisers are putting a stamp of approval on their respective editorial biases and — in Fox’s case — the deliberate spreading of misinformation that has been shown to carry a real social cost.

All this points to a toxic cycle becoming more commonplace in ad-supported media: The drive to attract and effectively target an audience leads a medium to embrace questionable ethical practices. These practices then taint the platform itself, leading to it potentially becoming brand-toxic. The advertisers then must choose between reaching an available audience that can expand its business, or avoiding the toxicity of the platform. The challenge for the brand then becomes a contest to see how long it can hold its nose while it continues to maximize sales and profits.

For Lush, the scent of Facebook’s bullshit finally grew too much to bear — at least for now.

The Unusual Evolution of the Internet

The Internet we have today evolved out of improbability. It shouldn’t have happened the way it did. It evolved as a wide-open network forged by starry-eyed academics and geeks who really believed it might make the world better. It wasn’t supposed to win against walled gardens like CompuServe, Prodigy and AOL — but it did. If you rolled back the clock, knowing what we know now, you could be sure it would never play out the same way again.

To use the same analogy that Eric Raymond did in his now-famous essay on the development of Linux, these were people who believed in bazaars rather than cathedrals. The internet was cobbled together to scratch an intellectual and ethical itch, rather than a financial one.

But today, as this essay in The Atlantic by Jonathan Zittrain warns us, the core of the internet is rotting. Because it was built by everyone and no one, all the superstructure that was assembled on top of that core is teetering. Things work, until they don’t: “The internet was a recipe for mortar, with an invitation for anyone, and everyone, to bring their own bricks.”

The problem is, it’s no one’s job to make sure those bricks stay in place.

Zittrain talks about the holes in humanity’s store of knowledge. But there’s another thing about this evolution that is either maddening or magical, depending on your perspective: It was never built with a business case in mind.

Eventually, commerce pipes were retrofitted into the whole glorious mess, and billions were made. Google alone has managed to pull in over a trillion dollars in revenue in less than 20 years by becoming the de facto index to the world’s most haphazard library of digital stuff. Amazon went one better, using the Internet to reinvent humanity’s marketplace and pulling in $2 trillion in revenue along the way.

But despite all this massive monetization, the beneficiaries still at least had to pay lip service to that original intent: the naïve belief that technology could make us better, and that it didn’t just have to be about money.

Even Google, which is on its way to posting $200 billion in revenue, making it the fifth biggest media company in the world (after Netflix, Disney, Comcast, and AT&T), stumbled on its way to making a buck. Perhaps it’s because its founders, Larry Page and Sergey Brin, didn’t trust advertising. In their original academic paper, they said that “advertising-funded search engines will inherently be biased toward the advertisers and away from the needs of consumers.” Of course, they ultimately ended up giving in to the dark side of advertising. But I watched the Google user experience closely from 2003 to 2011, and that dedication to the user was always part of a delicate balancing act that was generally successful.

But that innocence of the original Internet is almost gone, as I noted in a recent post. And there are those who want to make sure that the next thing — whatever it is — is built on a framework that has monetization built in. It’s why Mark Zuckerberg is feverishly hoping that his company can build the foundations of the Metaverse. It’s why Google is trying to assemble the pipes and struts that build the new web. Those things would be completely free of the moral — albeit naïve — constraints that still linger in the original model. In the new one, there would only be one goal: making sure shareholders are happy.

It’s also natural that many of those future monetization models will likely embrace advertising, which is, as I’ve said before, the path of least resistance to profitability.

We should pay attention to this. The very fact that the Internet’s original evolution was as improbable and profit-free as it was puts us in a unique position today. What would it look like if things had turned out differently, and the internet had been profit-driven from day one? I suspect it might have been better-maintained but a lot less magical, at least in its earliest iterations.

Whatever that new thing is, it will form a significant part of our reality. It will be even more foundational and necessary to us than the current internet. We won’t be able to live without it. For that reason, we should worry about the motives that may lie behind whatever “it” will be.

The Relationship between Trust and Tech: It’s Complicated

Today, I wanted to follow up on last week’s post about not trusting tech companies with your privacy. In that post, I said, “To find a corporation’s moral fiber, you always, always, always have to follow the money.”

A friend from back in my industry show days — the always insightful Brett Tabke — reached out to comment, suggesting that the “holier-than-thou” privacy stand adopted by Tim Cook and Apple in the current brouhaha with Facebook is one of convenience.

“I really wonder though if it is a case of do-the-right-thing privacy moral stance, or one of convenience that supports their ecosystem, and attacks a competitor?” he asked.

It’s hard to argue against that. As Brett mentioned, Apple really can’t lose by “taking money out of a side-competitor’s pocket and using it to lay more foundational cornerstones in the walled garden, [which] props up the illusion that the garden is a moral feature, and not a criminal antitrust offence.”

But let’s look beyond Facebook and Apple for a moment. As Brett also mentioned to me, “So who does a privacy action really impact more? Does it hit Facebook or ultimately Google? Facebook is just collateral damage here in the real war with Google. Apple and Google control their own platform ecosystems, but only Google can exert influence over the entire web. As we learned from the unredacted documents in the States vs Google antitrust filings, Google is clearly trying to leverage its assets to exert that control — even when ethically dubious.”

So, if we are talking trust and privacy, where is Google in this debate? Given the nature of Google’s revenue stream, its stand on privacy is not quite as blatantly obvious (or as self-serving) as Facebook’s. Both depend on advertising to pay the bills, but the nature of that advertising is significantly different.

According to its most recent annual report, 57% of the $182-billion annual revenue stream of Alphabet, Google’s parent company, still comes from search ads. And search advertising is relatively immune from crackdowns on privacy.
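In rough dollar terms, that split works out as follows. (This is a back-of-the-envelope sketch using only the two figures above; Alphabet’s actual reporting breaks revenue into more segments than this.)

```python
# Rough split of Alphabet's revenue between search ads and everything else,
# using the $182 billion total and 57% search share cited above.
total_revenue_bn = 182   # annual revenue, in billions of dollars
search_share = 0.57      # portion attributed to search ads

search_bn = total_revenue_bn * search_share
other_bn = total_revenue_bn - search_bn
print(f"Search ads: ~${search_bn:.0f}B; everything else: ~${other_bn:.0f}B")  # ~$104B vs ~$78B
```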

When you search for something on Google, you have already expressed your intent, which is the clearest possible signal with which you can target advertising. Yes, additional data taken with or without your knowledge can help fine-tune ad delivery — and Google has shown it’s certainly not above using this — but Apple tightening up its data security will not significantly impair Google’s ability to make money through its search revenue channel.

Facebook’s advertising model, on the other hand, targets you well before any expression of intent. For that reason, it has to rely on behavioral data and other targeting to effectively deliver those ads. Personal data is the lifeblood of such targeting. Turn off the tap, and Facebook’s revenue model dries up instantly.

But Google has always had ambitions beyond search revenue. Even today, 43% of its revenue comes from non-search sources. Google has always struggled with the inherently capped nature of search-based ad inventory. There are only so many searches done against which you can serve advertising. And, as Brett points out, that leads Google to look at the very infrastructure of the web to find new revenue sources. And that has led to signs of a troubling collusion with Facebook.

Again, we come back to my “follow the money” mantra for rooting out rot in the system. And in this case, the money we’re talking about is the premium that Google skims off the top when it determines which ads are shown to you. That premium depends on Google’s ability to use data to target the most effective ads possible through its own “Open Bidding” system. According to the unredacted documents released in the antitrust suit, that premium can amount to 22% to 42% of the ad spend that goes through that system.

To sum up, it appears that if you want to know who can be trusted most with your data, it’s the companies that don’t depend on that data to support an advertising revenue model. Right now, that’s Apple. But as Brett also pointed out, don’t mistake this for any warm, fuzzy feeling that Apple is your knight in shining armour: “Apple has shown time and time again they are willing to sacrifice strong desires of customers in order to make money and control the ecosystem. Can anyone look past headphone jacks, MacBook jacks, or the absence of MacBook touch screens without getting the clear indication that these were all robber-baronesque choices of a monopoly in action? If so, then how can we go ‘all in’ on privacy with them just because we agree with the stance?”

The Tech Giant Trust Exercise

If we look at those that rule in the Valley of Silicon — the companies that determine our technological future — it seems, as I previously wrote, that Apple alone is serious about protecting our privacy.

MediaPost editor in chief Joe Mandese shared a post late last month about how Apple’s new privacy features are increasingly taking aim at the various ways in which advertising can be targeted to specific consumers. The latest victim in those sights is geotargeting.

Then Steve Rosenbaum mentioned last week that as Apple and Facebook gird their loins and prepare to do battle over the next virtual dominion — the metaverse — they are taking two very different approaches. Facebook sees this next dimension as an extension of its hacker mentality, a “raw, nasty network of spammers.” Apple is, as always, determined to exert top-down control over who plays in its sandbox, only welcoming those who are willing to play by its rules. In that approach, the company is also signaling that it will take privacy in the metaverse seriously. Apple CEO Tim Cook said he believes “users should have the choice over the data that is being collected about them and how it’s used.”

Apple can take this stand because its revenue model doesn’t depend on advertising. To find a corporation’s moral fiber, you always, always, always have to follow the money. Facebook depends on advertising for revenue. And it has repeatedly shown it doesn’t really give a damn about protecting the privacy of users. Apple, on the other hand, takes every opportunity to unfurl the privacy banner as its battle standard because its revenue stream isn’t really impacted by privacy.

If you’re looking for the rot at the roots of technology, a good place to start is with anything that relies on advertising. In my 40 years in marketing, I have come to the inescapable conclusion that it is impossible for business models that rely on advertising as their primary source of revenue to stay on the right side of privacy concerns. There is an inherent conflict that cannot be resolved. In a recent earnings call, Facebook CEO Mark Zuckerberg said it about as clearly as it could be said: “As expected, we did experience revenue headwinds this quarter, including from Apple’s [privacy rule] changes that are not only negatively affecting our business, but millions of small businesses in what is already a difficult time for them in the economy.”

Facebook has proven time and time again that when the need for advertising revenue runs up against a question of ethical treatment of users, it will always be the ethics that give way.

It’s also interesting that Europe is light-years ahead of North America in introducing legislation that protects privacy. According to one Internet Privacy Ranking study, four of the top five countries for protecting privacy are in Northern Europe; Australia is the fifth. My country, Canada, shares many of the same characteristics; we rank seventh. The U.S. ranks 18th.

There is an interesting corollary here I’ve touched on before. All these top-ranked countries are social democracies. All have strong public broadcasting systems. All have a very different relationship with advertising than the U.S. does. Those of us who live in these countries are not immune to the dangers of advertising (this is certainly true for Canada), but our media structure is not wholly dependent on it. The U.S., right from the earliest days of electronic media, took a different path — one that relied almost exclusively on advertising to pay the bills.

As we start thinking about things like the metaverse or other forms of reality that are increasingly intertwined with technology, this reliance on advertising-funded platforms is something we must consider long and hard. It won’t be the companies that initiate the change. An advertising-based business model follows the path of least resistance, making it the shortest route to that mythical unicorn success story. The only way this will change will be if we — as users — demand that it changes.

And we should — we must — demand it. Ad-based tech giants that have no regard for our personal privacy are one of the greatest threats we face. The more we rely on them, the more they will ask of us.

Whatever Happened to the Google of 2001?

Having lived through it, I can say that the decade from 2000 to 2010 was an exceptional time in corporate history. I was reminded of this as I was reading media critic and journalist Ken Auletta’s book, “Googled: The End of the World as We Know It.” Auletta, along with many others, sensed a seismic disruption in the way media worked. A ton of books came out on this topic in the same time frame, and Google was the company most often singled out as the cause of the disruption.

Auletta’s book was published in 2009, near the end of this decade, and it’s interesting reading it in light of the decade plus that has passed since. There was a sort of breathless urgency in the telling of the story, a sense that this was ground zero of a shift that would be historic in scope. The very choice of Auletta’s title reinforces this: “The End of the World as We Know It.”

So, with 10 years plus of hindsight, was he right? Did the world we knew end?

Well, yes. And Google certainly contributed to this. But it probably didn’t change in quite the way Auletta hinted at. If anything, Facebook ended up having a more dramatic impact on how we think of media, but not in a good way.

At the time, we all watched Google take its first steps as a corporation with a mixture of incredulous awe and not a small amount of schadenfreude. Larry Page and Sergey Brin were determined to do it their own way.

We in the search marketing industry had front row seats to this. We attended social mixers on the Google campus. We rubbed elbows at industry events with Page, Brin, Eric Schmidt, Marissa Mayer, Matt Cutts, Tim Armstrong, Craig Silverstein, Sheryl Sandberg and many others profiled in the book. What they were trying to do seemed a little insane, but we all hoped it would work out.

We wanted a disruptive and successful company to not be evil. We welcomed its determination — even if it seemed naïve — to completely upend the worlds of media and advertising. We even admired Google’s total disregard for marketing as a corporate priority.

But there was no small amount of hubris at the Googleplex — and for this reason, we also hedged our hopeful bets with just enough cynicism to be able to say “we told you so” if it all came crashing down.

In that decade, everything seemed so audacious and brashly hopeful. It seemed like ideological optimism might — just might — rewrite the corporate rulebook. If a revolution did take place, we wanted to be close enough to golf clap the revolutionaries onward without getting directly in the line of fire ourselves.

Of course, we know now that what took place wasn’t nearly that dramatic. Google became a business: a very successful business with shareholders, a grown-up CEO and a board of directors, but still a business not all that dissimilar to other Fortune 100 examples. Yes, Google did change the world, but the world also changed Google. What we got was more evolution than revolution.

The optimism of 2000 to 2010 would be ground down in the next 10 years by the same forces that have been driving corporate America for the past 200 years: the need to expand markets, maximize profits and keep shareholders happy. The brash ideologies of founders would eventually morph to accommodate ad-supported revenue models.

As we now know, the world was changed by the introduction of ways to make advertising even more pervasively influential and potentially harmful. The technological promise of 20 years ago has been subverted to screw with the very fabric of our culture.

I didn’t see that coming back in 2001. I probably should have known better.

The Terrors of New Technology

My neighbour just got a new car. And he is terrified. He told me so yesterday. He has no idea how the hell to use it. This isn’t just a new car. It’s a massive learning project that can intimidate the hell out of anyone. It’s technology run amok. It’s the canary in the coal mine of the new world we’re building.

Perhaps – just perhaps – we should be more careful in what we wish for.

Let me provide the back story. His last car was his retirement present to himself, which he bought in 2000. He loved the car, a hard-top convertible. At the time he bought it, it was state of the art. But this was well before the Internet of Things and connected technology. The car did pretty much what you expected it to. Almost anyone could get behind the wheel and figure out how to make it go.

This year, under much prompting from his son, he finally decided to sell his beloved convertible and get a new car. But this isn’t just any car. It is a high-end electric sports car. Again, it is top of the line. And it is connected in pretty much every way you could imagine, and in many ways that would never cross any of our minds.

My neighbour has had this new car for about a week. And he’s still afraid to drive it anywhere. “Gord,” he said, “the thing terrifies me. I still haven’t figured out how to get it to open my garage door.” He has done online tutorials. He has set up a Zoom session with the dealer to help him navigate the umpteen zillion screens that show up on the smart display. After several frustrating experiments, he has learned he needs to pair it with his wifi system at home to get it to recharge properly. No one could just hop behind the wheel and drive it. You would have to sign up for an intensive technology boot camp before you were ready to climb a near-vertical learning curve. The capabilities of this car are mind boggling. And that’s exactly the problem. It’s damned near impossible to do anything with a boggled mind.

The acceptance of new technology has generated a vast body of research. I myself did an exhaustive series of blog posts on it back in 2014. Ever since sociologist Everett Rogers did his seminal work on the topic back in 1962, we have known that there are hurdles to overcome in grappling with something new, and that we don’t all clear those hurdles at the same rate. Some of us never clear them at all.

But I also suspect that the market, especially at the high end, has become so enamored with embedding technology that it has forgotten how difficult it might be for some of us to adopt that technology, especially those of us of a certain age.

I am and always have been an early adopter. I geek out on new technology. That’s probably why my neighbour has tapped me to help him figure out his new car. I’m the guy my family calls when they can’t get their new smartphone to work. And I don’t mind admitting I’m slipping behind. I think we’re all the proverbial frogs in boiling water. And that water is technology. It’s getting harder and harder just to use the new shit we buy.

Here’s another thing that drives me batty about technology. It’s a constantly moving target. Once you learn something, it doesn’t stay learnt. It upgrades itself, changes platforms or becomes obsolete. Then you have to start all over again.

Last year, I started retrofitting our home to be a little smarter. In the space of that year, I’ve had sensors that mysteriously go offline, hubs that suddenly stop working, automation routines that are moodier than a hormonal teenager, and a lot of stuff that just fits into the “I have no idea” category. When it all works, it’s brilliant. I remember that one day – it was special. The other 364 have been a pain in the ass of varying intensity. And that’s for me, the tech guy. My wife sometimes feels like a prisoner in her own home. She has little appreciation for the mysterious gifts of technology that allow me to turn on our kitchen lights when we’re in Timbuktu (should we ever go there, and if we can find a good wifi signal).

Technology should be a tool. It should serve us, not hold us slave to its whims. It would be so nice to be able to just make coffee from our new coffee maker, instead of spending a week trying to pair it with our toaster so breakfast is perfectly synchronized.

Oops, got to go. My neighbour’s car has locked him in his garage.

Re-engineering the Workplace

What happens when over 60,000 Microsoft employees are forced to work from home because of a pandemic? Funny you should ask. Microsoft just came out with a large-scale study that looks at exactly that question. The good news is that employees feel more included and supported by their managers than ever. But there is bad news as well:

“Our results show that firm-wide remote work caused the collaboration network of workers to become more static and siloed, with fewer bridges between disparate parts. Furthermore, there was a decrease in synchronous communication and an increase in asynchronous communication. Together, these effects may make it harder for employees to acquire and share new information across the network.”

To me, none of this is surprising. On a much smaller scale, we experienced exactly this when we experimented with a virtual workplace a decade ago. In fact, it virtually echoes the pros and cons of a virtual workplace that I have talked about in previous posts, particularly the two (one, two) that dealt with the concept of “burstiness” – those magical moments of collaborative creativity experienced when a room full of people gets “on a roll.”

What this study does do, however, is provide empirical evidence to back up my hunches. There is nothing like a global pandemic to allow the recruitment of a massive sample to study the impact of working from home.

In many, many aspects of our society, COVID was a game changer. It forcefully pushed us along the adoption curve, mandating widescale adoption of technologies that we probably would have been much happier to simply dabble in. The virtual workplace was one of these, but there were others.

Yet this example in particular, because of the breadth of its impact, gives us an insightful glimpse into one particular trend: we are increasingly swapping the ability to physically be together for a virtual connection mediated through technology. The first of these is a huge part of our evolved social strategies that are hundreds of thousands of years in the making. The second is barely a couple of decades old. There are bound to be consequences, both intended and unintended.

In today’s post, I want to take another angle to look at the pros and cons of a virtual workplace – by exploring how music has been made over the past several decades.

Supertramp and Studio Serendipity

My brother-in-law is a walking encyclopedia of music trivia. He put me on to this particular tidbit from one of my favorite bands of the ’70s and ’80s – Supertramp.

The band was in the studio working on their Breakfast in America album. In the corner of the studio, someone was playing a handheld video game during a break in the recording: Mattel’s Football. The game had a distinctive double beep on your fourth down. Roger Hodgson heard this and now that same sound can be heard at the 3:24 mark of The Logical Song, just after the lyric “d-d-digital”.

This is just one example of what I would call “Studio Serendipity.” For every band, every album, every song that was recorded collaboratively in the studio, there are examples like this of creativity that just sprang from people being together. It is an example of that “burstiness” I was talking about in my previous posts.

Billie Eilish and the Virtual Studio

But for this serendipity to even happen, you had to get into a recording studio. And the barriers to doing that were significant. You had to get a record deal – or – if you were going independent, save up enough money to rent a studio.

For the other side of the argument, let’s talk about Billie Eilish. She and her brother Finneas embody virtual production. We first heard about Billie in 2015, when they recorded Ocean Eyes in a bedroom in the family’s tiny LA bungalow and uploaded it to SoundCloud. Billie was 14 at the time. The song went viral overnight and did lead to a record deal, but their breakout album, When We All Fall Asleep, Where Do We Go?, was recorded in that same bedroom.

Digital technology dismantled the vertical hierarchy of record labels and democratized the industry. If that hadn’t happened, we might never have heard of Billie Eilish.

The Best of Both Worlds

Choosing between virtual and physical workplaces is not a binary choice. In the two examples I gave, creativity was a hybrid that came from both solitary inspiration and collaborative improvisation. The first thrives in a virtual workplace and the second works best when we’re physically together. There are benefits to both models, and these benefits are non-exclusive.

A hybrid model can give you the best of both worlds, but you have to take into account a number of things that might be a stretch for typical HR policies – things like evolutionary psychology, cognition and attentional focus, non-verbal communication strategies and something that neuroscientist Antonio Damasio calls “somatic markers.” According to Damasio, we think as much with our bodies as we do with our brains.

Our performance in anything is tied to our physical surroundings. And when we are looking to replace a physical workplace with a virtual substitute, we have to appreciate the significance this has on us subconsciously.

Re-engineering Communication

Take communication, for example. We may feel that we have more ways than ever to communicate with our colleagues, including an entire toolbox of digital platforms. But none of them account for this simple fact: the majority of our communication is non-verbal. We communicate with our eyes, our hands, our bodies, our expressions and the tone of our voice. Trying to squeeze all this through the trickle of bandwidth that technology provides, even when we have video available, is just going to produce frustration. It is no substitute for being in the same room together, sharing the same circumstances. It would be like trying to race a car with only one cylinder firing.

This is perhaps the single biggest drawback to the virtual workplace – this lack of “somatic” connection – the shared physical bond that underlies so much of how we function. When you boil it down, it is the essential ingredient for “burstiness.” And I just don’t think we have a technological substitute for it – not at this point, anyway.

But the researcher who identified burstiness does have one rather counterintuitive suggestion. If we can’t be in the same room together, perhaps we have to “dumb down” the technology we use. Anita Williams Woolley suggests the good, old-fashioned phone call might truly be the next best thing to being there.

Getting Bitch-Slapped by the Invisible Hand

Adam Smith first talked about the invisible hand in 1759. He was looking at the divide between the rich and the poor and said, in essence, that “greed is good.”

Here is the exact wording:

“They (the rich) are led by an invisible hand to make nearly the same distribution of the necessaries of life, which would have been made, had the earth been divided into equal portions among all its inhabitants, and thus without intending it, without knowing it, advance the interest of the society.”

The effect of “the hand” is most clearly seen in the wide-open market that emerges after established players collapse and make way for new competitors riding a wave of technical breakthroughs. Essentially, it is a cycle.

But something is happening that may never have happened before. For the past 300 years of our history, the one constant has been the trend of consumerism. Economic cycles have rolled through, but all have been in the service of us having more things to buy.

Indeed, Adam Smith’s entire theory depends on greed: 

“The rich … consume little more than the poor, and in spite of their natural selfishness and rapacity, though they mean only their own conveniency, though the sole end which they propose from the labours of all the thousands whom they employ, be the gratification of their own vain and insatiable desires, they divide with the poor the produce of all their improvements.”

It’s the trickle-down theory of gluttony: Greed is a tide that raises all boats.

The theory of the Invisible Hand assumes there are infinite resources available. Waste is necessarily built into the equation. But we have now gotten to the point where consumerism has been driven past the planet’s ability to sustain our greedy grasping for more.

Nobel-Prize-winning economist Joseph Stiglitz, for one, recognized that environmental impact is not accounted for with this theory. Also, if the market alone drives things like research, it will inevitably become biased towards benefits for the individual and not the common good.

There needs to be a more communal counterweight to balance the effects of individual greed. Given this, the new age of consumerism might look significantly different.

There is one outcome of market-driven economics that is undeniable: All the power lies in the connection between producers and consumers. Because the world has been built on the predictable truth of our always wanting more, we have been given the ability to disrupt that foundation simply by changing our value equation: buying for the greater good rather than our own self-interest.

I’m skeptical that this is even possible.

It’s a little daunting to think that our future survival relies on our choices as consumers. But this is the world we have made. Consumption is the single greatest driver of our society. Everything else is subservient to it.

Government, science, education, healthcare, media, environmentalism: All the various planks of our societal platform rest on the cross-braces of consumerism. It is the one behavior that rules all the others. 

This becomes important to think about because this shit is getting real — so much faster than we thought possible.

I write this from my home, which is about 100 miles from the village of Lytton, British Columbia. You might have heard it mentioned recently. On June 29, Lytton reported the highest temperature ever recorded in Canada: a scorching 121.3 degrees Fahrenheit (49.6 degrees C for my Canadian readers). That’s higher than the hottest temperature ever recorded in Las Vegas. Lytton is 1,000 miles north of Las Vegas.

As I said, that was how Lytton made the news on June 29. But it also made the news again on June 30. That was when a wildfire burned almost the entire town to the ground.

In one week of an unprecedented heat wave, hundreds of sudden deaths occurred in my province. It’s believed the majority of them were caused by the heat.

We are now at the point where we have to shift the mental algorithms we use when we buy stuff. Our consumer value equation has always been self-centered, based on the calculus of “what’s in it for me?” It was this calculation that made Smith’s Invisible Hand possible.

But we now have to change that behavior and make choices that embrace individual sacrifice. We have to start buying based on “What’s best for us?”

In a recent interview, a climate-change expert said he hoped we would soon see carbon-footprint stickers on consumer products. Given a choice between two pairs of shoes, one that was made with zero environmental impact and one that was made with a total disregard for the planet, he hoped we would choose the former, even if it was more expensive.

I’d like to think that’s true. But I have my doubts. Ethical marketing has been around for some time now, and at best it’s a niche play. According to the Canadian Coalition for Farm Animals, the vast majority of egg buyers in Canada — 98% — buy caged eggs even though we’re aware that the practice is hideously cruel. We do this because those eggs are cheaper.

The sad fact is that consumers really don’t seem to care about anything other than their own self-interest. We don’t make ethical choices unless we’re forced to by government legislation. And then we bitch like hell about our rights as consumers. “We should be given the choice,” we chant. “We should have the freedom to decide for ourselves.”

Maybe I’m wrong. I sure hope so. I would like to think — despite recent examples to the contrary of people refusing to wear face masks or get vaccinated despite a global pandemic that took millions of lives — that we can listen to the better angels of our nature and make choices that extend our ability to care beyond our circle of one.

But let’s look at our track record on this. From where I’m sitting, 300 years of continually making bad choices have now brought us to the place where we no longer have the right to make those choices. This is what the Invisible Hand has wrought. We can bitch all we want, but that won’t stop more towns like Lytton, B.C., from burning to the ground.

Why Our Brains Struggle With The Threat Of Data Privacy

We don’t want to share our personal data but, according to a recent study reported on by MediaPost’s Laurie Sullivan, we want the brands we trust to know us when we come shopping. It seems paradoxical.

But it’s not — really. It ties in with the way we’ve always thought.

Again, we have to accept that we really don’t understand how the data ecosystem works — at least, not on an instant and intuitive level. Our brains have no evolved mechanisms that deal with new concepts like data privacy. So we have borrowed other parts of the brain that do exist. Evolutionary biologists call this “exaptation.”

For example, the way we deal with brands seems to be the same way we deal with people — and we have tons of experience doing that. Some people we trust. Most people we don’t. For the people we trust, we have no problem sharing something of ourselves. In fact, it’s exactly that sharing that nurtures relationships and helps them grow.

It’s different with people we don’t trust. Not only do we not share with them, we work to avoid them, putting physical distance between us and them. We’d cross to the other side of the street to avoid bumping into them.

In a world that was ordered and regulated by proximity, this worked remarkably well. Keeping our enemies at arm’s length generally kept us safe from harm.

Now, of course, distance doesn’t mean the same thing it used to. We now maneuver in a world of data, where proximity and distance have little impact. But our brains don’t know that.

As I said, the brain doesn’t really know how digital data ecosystems work, so it does its best to substitute concepts it has evolved to handle those it doesn’t understand at an intuitive level.

The proxy for distance the brain seems to use is task focus. If we’re trying to do something, everything related to that thing is “near” and everything not relevant to it is “far.” But this is an imperfect proxy at best and an outright misleading one at worst.

For example, we will allow our data to be collected in order to complete a task. The task is “near.” But in most cases, the data we share has little to do with the task we’re trying to accomplish. It is labelled by the brain as “far” and therefore poses no immediate threat.

It’s a bait-and-switch tactic that data harvesters have perfected. Our trust-warning systems are not engaged because there are no proximate signs to trigger them. Any potential breaches of trust happen well after the fact – if they happen at all. Most times, we’re simply not aware of where our data goes or what happens to it. All we know is that allowing that data to be collected takes us one step closer to accomplishing our task.

That’s what sometimes happens when we borrow one evolved trait to deal with a new situation: the fit is not always perfect. Some aspects work, others don’t.

And that is exactly what is happening when we try to deal with the continual erosion of online trust. In the moment, our brain is trying to apply the same mechanisms it uses to assess trust in a physical world. What we don’t realize is that we’re missing the warning signs our brains have evolved to intuitively look for.

We also drag this evolved baggage with us when we’re dealing with our favorite brands. One of the reasons you trust your closest friends is that they know you inside and out. This intimacy is a product of a physical world. It comes from sharing the same space with people.

In the virtual world, we expect the brands we know and love to have this same knowledge of us. It frustrates us when we are treated like a stranger. Think of how you would react if the people you love the most gave you the same treatment.

This jury-rigging of our personal relationship machinery to do double duty for the way we deal with brands may sound far-fetched, but marketing brands have only been around for a few hundred years. That is just not enough time for us to evolve new mechanisms to deal with them.

Yes, the rational, “slow loop” part of our brains can understand brands, but the “fast loop” has no “brand” or “data privacy” modules. It has no choice but to use the functional parts it does have.

As I mentioned in a previous post, there are multiple studies that indicate that it’s these parts of our brain that fire instantly, setting the stage for all the rationalization that will follow. And, as our own neuro-imaging study showed, it seems that the brain treats brands the same way it treats people.

I’ve been watching this intersection between technology and human behaviour for a long time now. More often than not, I see this tendency of the brain to make split-second decisions in environments where it just doesn’t have the proper equipment to make those decisions. When we stop to think about these things, we believe we understand them. And we do, but we had to stop to think. In the vast majority of cases, that’s just not how the brain works.