My Many Problems with the Metaverse

I recently had dinner with a comedian who had just done his first gig in the Metaverse. It was in a new Meta-Comedy Club. He was excited and showed me a recording of the gig.

I have to admit, my inner geek thought it was very cool: disembodied hands clapping with avataresque names floating above, bursts of virtual confetti for the biggest laughs, and even a virtual hook that instantly snagged meta-hecklers, banishing them to meta-purgatory until they promised to behave. The comedian said he wanted to record a comedy meta-album in the meta-club to release to his meta-followers.

It was all very meta.

As I mentioned, my inner geek is intrigued by the Metaverse. But as a human who ponders our future (probably more than is healthy), I have grave concerns on a number of fronts. I have mentioned most of these individually in previous posts, but I thought it might be useful to round them up:

Removed from Reality

My first issue is that the Metaverse just isn’t real. It’s a manufactured reality. This is at the heart of all the other issues to come.

We might think we’re clever, and that we can manufacture a better world than the one that nature has given us, but my response to that would be Orgel’s Second Rule, courtesy of Francis Crick, co-discoverer of the structure of DNA: “Evolution is cleverer than you are.”

For millions of years, we have evolved to be a good fit in our natural environment. There are thousands of generations of trial and error baked into our DNA that make us effective in our reality. Most of that natural adaptation lies hidden from us, ticking away below the surface of both our bodies and brains, silently correcting course to keep us aligned and functioning well in our world.

But we, in our never-ending human hubris, somehow believe we can engineer an environment better than reality in less than a single generation. If we take Second Life as the first iteration of the metaverse, we’re barely two decades into the engineering of a meta-reality.

If I were placing bets on who is the better environmental designer for us, humans or evolution, my money would be on evolution, every time.

Whose Law Is It Anyway?

One of the biggest selling features of the Metaverse is that it frees us from the restrictions of geography. Physical distance has no meaning when we go meta.

But this also has issues. Societies need laws and our laws have evolved to be grounded within the boundaries of geographical jurisdictions. What happens when those geographical jurisdictions become meaningless? Right now, there are no laws specifically regulating the Metaverse. And even if there are laws in the future, in what jurisdiction would they be enforced?

This is a troubling loophole – and by hole I mean a massive gaping metaverse-sized void. You know who is attracted by a lack of laws? Those who have no regard for the law. If you don’t think that criminals are currently eyeing the metaverse looking for opportunity, I have a beautiful virtual time-share condo in the heart of meta-Boca Raton that I’d love to sell you.

Data Is the Matter of the Metaverse

Another “selling feature” for the metaverse is the ability to append metadata to our own experiences, enriching them with access to information and opportunities that would be impossible in the real world. In the metaverse, the world is at our fingertips – or in our virtual headset – as the case may be. We can stroll through worlds, real or imagined, and the sum of all our accumulated knowledge is just one user-prompt away.

But here’s the thing about this admittedly intriguing notion: it makes data a commodity and commodities are built to be exchanged based on market value. In order to get something of value, you have to exchange something of value. And for the builders of the metaverse, that value lies in your personal data. The last shreds of personal privacy protection will be gone, forever!

A For-Profit Reality

This brings us to my biggest problem with the Metaverse – the motivation for building it. It is being built not by philanthropists or philosophers, academics or even bureaucrats. The metaverse is being built by corporations, who have to hit quarterly profit projections. They are building it to make a buck, or, more correctly, several billion bucks.

These are the same people who have made social media addictive by taking the dirtiest secrets of Las Vegas casinos and using them to enslave us through our smartphones. They have toppled legitimate governments for the sake of advertising revenue. They have destroyed our concept of truth, bashed apart the soft guardrails of society and are currently dismantling democracy. There is no noble purpose for a corporation – their only purpose is profit.

Do you really want to put your future reality in those hands?

The Ten-Day Tech Detox

I should have gone cold turkey on tech. I really should have.

It would have been the perfect time – should have been the perfect time.

But I didn’t. As I spent 10 days on BC’s gorgeous Sunshine Coast with family, I also brought along my assortment of connected gadgets.

But I will say it was a partially successful detox. I didn’t crack open the laptop as much as I usually do. I generally restricted use of my iPad to reading a book.

But my phone – it was my phone, always within reach, that tempted me with social media’s siren call.

In a podcast, Andrew Selepak, a social media professor at the University of Florida, suggests that rather than attempting a total detox that is probably doomed to fail, you use vacations as an opportunity to treat tech as a tool rather than an addiction.

I will say that for most of the time, that’s what I did. As long as I was occupied with something I was fine. 

Boredom is the enemy. It’s boredom that catches you. And the sad thing was, I really shouldn’t have been bored. I was in one of the most beautiful places on earth. I had the company of people I loved. I saw humpback whales – up close – for Heaven’s sake. If ever there was a time to live in the moment, to embrace the here and now, this was it. 

The problem, I realized, is that we’re not really comfortable any more with empty spaces – whether they be in conversation, in our social life or in our schedule of activities. We feel guilt and anxiety when we’re not doing anything.

It was an interesting cycle. As I decompressed after many weeks of being very busy, the first few days were fine. “I need this,” I kept telling myself. It’s okay just to sit and read a book. It’s okay not to have every half-hour slot of the day meticulously planned to jam as much in as possible.

That lasted about 48 hours. Then I started feeling like I should be doing something. I was uncomfortable with the empty spaces.

The fact is, as I learned, boredom has always been part of the human experience. It’s a feature – not a bug. As I said, boredom creates the empty spaces that can be filled with creativity. Alicia Walf, a neuroscientist and senior lecturer in the Department of Cognitive Science at Rensselaer Polytechnic Institute, says it is critical for brain health to let yourself be bored from time to time.

“Being bored can help improve social connections. When we are not busy with other thoughts and activities, we focus inward as well as looking to reconnect with friends and family. 

“Being bored can help foster creativity. The eureka moment when solving a complex problem when one stops thinking about it is called insight.

“Additionally, being bored can improve overall brain health. During exciting times, the brain releases a chemical called dopamine which is associated with feeling good. When the brain has fallen into a predictable, monotonous pattern, many people feel bored, even depressed. This might be because we have lower levels of dopamine.”

That last bit, right there, is the clue to why our phones are particularly prone to being picked up in times of boredom. Actually, three things are at work here. The first is that our mobile devices let us carry an extended social network in our pockets. An article from Harvard explains: “Thanks to the likes of Facebook, Snapchat, Instagram, and others, smartphones allow us to carry immense social environments in our pockets through every waking moment of our lives.”

As Walf said, boredom is our brain’s way of cueing us to seek social interaction. Traditionally, this meant getting the hell out of our cave – or cabin – or castle – and getting some face time with other humans.

But technology has short-circuited that. Now, we get that social connection through the far less healthy substitute of a social media platform. And – in the most ironic twist – we get that social jolt not by interacting with the people we happen to be with, but by each staring at a tiny little screen that we hold in our hand.

The second problem is that mobile devices are not designed to leave us alone, basking in our healthy boredom. They are constantly beeping, buzzing and vibrating to get our attention. 

The third problem is that – unlike a laptop or even a tablet – mobile devices are our device of choice when we are jonesing for a dopamine jolt. It’s our phones we reach for when we’re killing time in a lineup, riding the bus or waiting for someone in a coffee shop. This is why I had a hard time relegating my phone to being just a tool while I was away.

As a brief aside – even the term “killing time” shows how we are scared to death of being bored. That’s a North American saying – boredom is something to be hunted down and eradicated. You know what Italians call it? “Il dolce far niente” – the sweetness of doing nothing. Many are the people who try to experience life by taking endless photos and posting on various feeds, rather than just living it. 

The fact is, we need boredom. Boredom is good, but we are declaring war on it, replacing it with a destructive need to continually bathe our brains in the dopamine high that comes from checking our Facebook feed or the latest TikTok reel.

At least one of the architects of this vicious cycle feels some remorse (also from the Harvard article): “‘I feel tremendous guilt,’ admitted Chamath Palihapitiya, former Vice President of User Growth at Facebook, to an audience of Stanford students. He was responding to a question about his involvement in exploiting consumer behavior. ‘The short-term, dopamine-driven feedback loops that we have created are destroying how society works.’”

That is why we have to put the phone down and watch the humpback whales. That, miei amici, is il dolce far niente!

With Digital Friends Like These, Who Needs Enemies?

Recently, I received an email from Amazon that began:

“You’re amazing. Really, you’re awesome! Did that make you smile? Good. Alexa is here to compliment you. Just say, ‘Alexa, compliment me’”

“What,” I said to myself, “sorry-assed state is my life in that I need to depend on a little black electronic hockey puck to affirm my self-worth as a human being?”

I realize that the tone of the email likely had tongue at least partway implanted in cheek, but still, seriously – WTF, Alexa? (Which, incidentally, Alexa also has covered. Pose that question and Alexa responds: “I’m always interested in feedback.”)

My next thought was, maybe I think this is a joke, but there are probably people out there that need this. Maybe their lives are dangling by a thread and it’s Alexa’s soothing voice digitally pumping their tires that keeps them hanging on until tomorrow. And – if that’s true – should I be the one to scoff at it?

I dug a little further into the question, “Can we depend on technology for friendship, for understanding, even – for love?”

The answer, it turns out, is probably yes.

A few studies have shown that we will share more with a virtual therapist than a human one in a face-to-face setting. We feel heard without feeling judged.

In another study, patients with a virtual nurse ended up creating a strong relationship with it that included:

  • Using close forms of greeting and goodbye
  • Expressing happiness to see the nurse
  • Using compliments
  • Engaging in social chat
  • And expressing a desire to work together and speak with the nurse again

Yet another study found that robots can even build a stronger relationship with us by giving us a pat on the hand or touching our shoulder. We are social animals and don’t do well when we lose that sociability. If we go too long without being touched, we experience something called “skin hunger” and start feeling stressed, depressed and anxious. The use of these robots is being tested in seniors’ care facilities to help combat extreme loneliness.

In reading through these studies, I was amazed at how quickly respondents seemed to bond with their digital allies. We have highly evolved mechanisms that determine when, and with whom, we place trust. In many cases, these judgements are based on non-verbal cues: body language, micro-expressions, even how people smell. It surprised me that when our digital friends presented none of these, the bonds still developed. In fact, it seems they were deeper and stronger than ever!

Perhaps it’s the very lack of humanness that is the explanation. As in the case of the success of a virtual therapist, maybe these relationships work because we can leave the baggage of being human behind. Virtual assistants are there to serve us, not judge or threaten us. We let our guards down and are more willing to open up.

Also, I suspect that the building blocks of these relationships are put in place not by the rational, thinking part of our brains but the emotional, feeling part. It’s been shown that self-affirmation works by activating the reward centers of our brain, the ventral striatum and ventromedial prefrontal cortex. These are not pragmatic, cautious parts of our cognitive machinery. As I’ve said before, they’re all gas and no brakes. We don’t think a friendship with a robot is weird because we don’t think about it at all, we just feel better. And that’s enough.

AI companionship seems a benign – even beneficial – use of technology, but what might the unintended consequences be? Are we opening ourselves up to potential dangers by depending on AI for our social contact – especially when the lines are blurred between for-profit motives and the affirmation we become dependent on?

In the therapeutic use cases of virtual relationships outlined up to now, there is no “for-profit” motive. But Amazon, Apple, Facebook, Google and the other providers of consumer-directed AI companionship are definitely in it for the money. Even more troubling, two of those – Facebook and Google – depend on advertising for their revenue. Much as this gang would love us to believe that they only have our best interests in mind, over $1.2 trillion in combined revenue says otherwise. I suspect they have put a carefully calculated price on digital friendship.

Perhaps it’s that – more than anything – that threw up the red flags when I got that email from Amazon. It sounded like it was coming from a friend, and that’s exactly what worries me.

The Physical Foundations of Friendship

It’s no secret that I worry about what the unintended consequences might be for us as we increasingly substitute a digital world for a physical one. What might happen to our society as we spend less time face-to-face with people and more time face-to-face with a screen?

Take friendship, for example. I have written before about how Facebook friends and real friends are not the same thing. A lot of this has to do with the mental work required to maintain a true friendship. This cognitive requirement led British anthropologist Robin Dunbar to come up with something called Dunbar’s Number – a rough rule-of-thumb that says we can’t really maintain a network of more than 150 friends, give or take a few.

Before you say, “I have way more friends on Facebook than that,” realize that I don’t care what your Facebook friend count is. Mine numbers at least three times Dunbar’s 150 limit. But they are not all true friends. Many are just the result of me clicking a link on my laptop. It’s quick, it’s easy, and there is absolutely no requirement to put any skin in the game. Once clicked, I don’t have to do anything to maintain these friendships. They are just part of a digital tally that persists until I might click again, “unfriending” them. Missing is the ongoing physical friction that demands the maintenance required to keep a true friendship from slipping into entropy.

So I was wondering – what is that magical physical and mental alchemy that causes us to become friends with someone in the first place? When we share physical space with another human, what is the spark that causes us to want to get to know them better? Or – on the flip side – what are the red flags that cause us to head for the other end of the room to avoid talking to them? Fortunately, there is some science that has addressed those questions.

We become friends because of something sociologists call homophily – being like each other. In today’s world, that leads to some unfortunate social consequences, but in our evolutionary environment, it made sense. It has to do with kinship ties and what ethologist Richard Dawkins called The Selfish Gene. We want family to survive to pass on our genes. The best way to motivate us to protect others is to have an emotional bond to them. And it just so happens that family members tend to look somewhat alike. So we like – or love – others who are like us.

If we tie in the impact of geography over our history, we start to understand why this is so. Geography that restricted travel and led to inbreeding generally dictated a certain degree of genetic “sameness” in our tribe. It was a quick way to sort in-groups from out-groups. And in a bloodier, less politically correct world, this was a matter of survival.

But this geographic connection works both ways. Geographic restrictions lead to homophily, but repeated exposure to the same people also increases the odds that you’ll like them. In psychology, this is called the mere-exposure effect.

In these two ways, the limitations of a physical world have a deep, deep impact on the nature of friendship. But let’s focus on the first for a moment.

It appears we have built-in “friend detectors” that can actually sense genetic similarities. In a rather fascinating study, Nicholas Christakis and James Fowler found that friends are so alike genetically, they could actually be family. If you drill down to the individual building blocks of a gene at the nucleotide level, your friends are about as genetically similar to you as your fourth cousins. As Christakis and Fowler say in their study, “friends may be a kind of ‘functional kin’.”

This shows how deeply friendship bonds are hardwired into us. Of course, this doesn’t happen equally across all genes. Evolution is nothing if not practical. For example, Christakis and Fowler found that specific systems do stay “heterophilic” (not alike) – such as our immune system. This makes sense. A group of people who stay in close proximity to each other is going to be more resistant to epidemics if there is some variety in what they’re individually immune to. If everyone had exactly the same immunity profile, the group would be highly resistant to some bugs and completely vulnerable to others. It would be putting all your disease-prevention eggs in one basket.

But in another example of extreme genetic practicality, how similar we smell to our friends is also determined genetically. Think about it. Would you rather be close to people who generally smell the same as you, or those who smell different? It seems a little silly in today’s world of private homes and extreme hygiene, but when you’re sharing very close living quarters with others and there’s no such thing as showers and baths, how everyone smells becomes extremely important.

Christakis and Fowler found that our olfactory sensibilities trend to the homophilic side between friends. In other words, the people we like smell alike. And this is important because of something called olfactory fatigue. We use smell as a difference detector. It warns us when something is not right. And our nose starts to ignore smells it gets used to, even offensive ones. It’s why you can’t smell your own typical body odor. Or, in another even less elegant example, it’s why your farts don’t stink as much as other people’s.

Given all this, it would make sense that if you had to spend time close to others, you would pick people who smelled like you. Your nose would automatically be less sensitive to their smells. And that’s exactly what a new study from the Weizmann Institute of Science found. In the study, the scent signatures of complete strangers were sampled using an electronic sniffer called an eNose. Then the strangers were asked to engage in nonverbal social interactions in pairs. Afterward, they were asked to rate each interaction based on how likely they would be to become friends with the person. The result? Based on their smells alone, the researchers were able to predict with 71% accuracy who would become friends.

The foundations of friendship run deep – down to the genetic building blocks that make us who we are. These foundations were built in a physical world over millions of years. They engage senses that evolved to help us experience that physical world. Those foundations are not going to disappear in the next decade or two, no matter how addictive Facebook or TikTok becomes. We can continue to layer technology over these foundations, but to deny them is to ignore human nature.

Don’t Be Too Quick To Dismiss The Metaverse

According to my fellow Media Insider Maarten Albarda, the metaverse is just another in a long line of bright shiny objects that — while promising to change the world of marketing — will probably end up on the giant waste heap of overhyped technologies.

And if we restrict Maarten’s caution to specifically the metaverse and its impact on marketing, perhaps he’s right. But I think this might be a case of not seeing the forest for the trees.

Maarten lists a number of other things that were supposed to revolutionize our lives: Clubhouse, AI, virtual reality, Second Life. All seemed to amount to much ado about nothing.

But as I said almost 10 years ago, when I first started talking about one of those overhyped examples, Google Glass — and what would eventually become the “metaverse” (in rereading this, perhaps I’m better at predictions than I thought) — the overall direction of these technologies does mark a fundamental shift:

“Along the way, we build a “meta” profile of ourselves, which acts as both a filter and a key to the accumulated potential of the ‘cloud.’ It retrieves relevant information based on our current context and a deep understanding of our needs, it unlocks required functionality, and it archives our extended network of connections.”

As Wired founding executive editor Kevin Kelly has told us, technology knows what it wants. Eventually, it gets it. Sooner or later, all these things bump up against a threshold that will mark a fundamental shift in how we live.

You may call this the long-awaited “singularity” or not. Regardless, it does represent a shift from technology being a tool we use consciously to enhance our experiences, to technology being so seamlessly entwined with our reality that it alters our experiences without us even being aware of it. We’re well down this path now, but the next decade will move us substantially further, beyond the point of no return.

And that will impact everything, including marketing.

What is interesting is the layer technology is building over the real world, hence the term “meta.” It’s a layer of data and artificial intelligence that will fundamentally alter our interactions with that world. It’s technology that we may not use intentionally — or, beyond the thin layer of whatever interface we use, may not even be aware of.

This is what makes it so different from what has come before. I can think of no technical advance in the past that is so consequential to us personally yet functions beyond the range of our conscious awareness or deliberate usage. The eventual game-changer might not be the metaverse. But a change is coming, and the metaverse is a signal of that.

Technology advancing is like the tide coming in. If you watch the individual waves coming in, they don’t seem to amount to much. One stretches a little higher than the last, followed by another that fizzles out at the shoreline. But cumulatively, they change the landscape — forever. This tide is shifting humankind’s relationship with technology. And there will be no going back.

Maybe Maarten is right. Maybe the metaverse will turn out to be a big nothingburger. But perhaps, just perhaps, the metaverse might be the Antonio Meucci of our time: an example where the technology was inevitable, but the timing wasn’t quite right.

Meucci was an Italian immigrant who started working on the design of a workable telephone in 1849, a full two decades before Alexander Graham Bell even started experimenting with the concept. Meucci filed a patent caveat in 1871, five years before Bell’s patent application was filed, but was destitute and didn’t have the money to renew it. His wave of technological disruption may have hit the shore a little too early, but that didn’t diminish the significance of the telephone, which today is generally considered one of the most important inventions of all time in terms of its impact on humanity.

Whatever is coming, and whether or not the metaverse represents the sea change catalyst that alters everything, I fully expect at some point in the very near future to pinpoint this time as the dawn of the technological shift that made the introduction of the telephone seem trivial in comparison.

The Unusual Evolution of the Internet

The Internet we have today evolved out of improbability. It shouldn’t have happened the way it did. It evolved as a wide-open network forged by starry-eyed academics and geeks who really believed it might make the world better. It wasn’t supposed to win against walled gardens like CompuServe, Prodigy and AOL — but it did. If you rolled back the clock, knowing what we know now, you could be sure it would never play out the same way again.

To use the same analogy that Eric Raymond did in his now-famous essay on the development of Linux, these were people who believed in bazaars rather than cathedrals. The internet was cobbled together to scratch an intellectual and ethical itch, rather than a financial one.

But today, as this essay in The Atlantic by Jonathan Zittrain warns us, the core of the internet is rotting. Because it was built by everyone and no one, all the superstructure that was assembled on top of that core is teetering. Things work, until they don’t: “The internet was a recipe for mortar, with an invitation for anyone, and everyone, to bring their own bricks.”

The problem is, it’s no one’s job to make sure those bricks stay in place.

Zittrain talks about the holes in humanity’s store of knowledge. But there’s another thing about this evolution that is either maddening or magical, depending on your perspective: It was never built with a business case in mind.

Eventually, commerce pipes were retrofitted into the whole glorious mess, and billions managed to be made. Google alone has managed to pull over a trillion dollars in revenue in less than 20 years by becoming the de facto index to the world’s most haphazard library of digital stuff. Amazon went one better, using the Internet to reinvent humanity’s marketplace and pulling in $2 trillion in revenue along the way.

But despite all this massive monetization, the beneficiaries still at least had to pay lip service to that original intent: the naïve belief that technology could make us better, and that it didn’t just have to be about money.

Even Google, which is on its way to posting $200 billion in revenue, making it one of the biggest media companies in the world, stumbled on its way to making a buck. Perhaps it’s because its founders, Larry Page and Sergey Brin, didn’t trust advertising. In their original academic paper, they said that “advertising-funded search engines will inherently be biased toward the advertisers and away from the needs of consumers.” Of course they ultimately ended up giving in to the dark side of advertising. But I watched the Google user experience closely from 2003 to 2011, and that dedication to the user was always part of a delicate balancing act that was generally successful.

But that innocence of the original Internet is almost gone, as I noted in a recent post. And there are those who want to make sure that the next thing — whatever it is — is built on a framework that has monetization built in. It’s why Mark Zuckerberg is feverishly hoping that his company can build the foundations of the Metaverse. It’s why Google is trying to assemble the pipes and struts that build the new web. Those things would be completely free of the moral — albeit naïve — constraints that still linger in the original model. In the new one, there would only be one goal: making sure shareholders are happy.

It’s also natural that many of those future monetization models will likely embrace advertising, which is, as I’ve said before, the path of least resistance to profitability.

We should pay attention to this. The very fact that the Internet’s original evolution was as improbable and profit-free as it was puts us in a unique position today. What would it look like if things had turned out differently, and the internet had been profit-driven from day one? I suspect it might have been better-maintained but a lot less magical, at least in its earliest iterations.

Whatever that new thing is, it will form a significant part of our reality. It will be even more foundational and necessary to us than the current internet. We won’t be able to live without it. For that reason, we should worry about the motives that may lie behind whatever “it” will be.

Adrift in the Metaverse

Humans are nothing if not chasers of bright, shiny objects. Our attention is always focused beyond the here and now. That is especially true when here and now is a bit of a dumpster fire.

The ultrarich know that this is part of the human psyche, and they are doubling down on it. Jeff Bezos and Elon Musk are betting on space. But others — including Mark Zuckerberg — are betting on something called the metaverse.

Just this past summer, Zuck told his employees about his master plan for Facebook:

“Our overarching goal across all of (our) initiatives is to help bring the metaverse to life.”

So what exactly is the metaverse? According to Wikipedia, it is

“a collective virtual shared space, created by the convergence of virtually enhanced physical reality and physically persistent virtual space, including the sum of all virtual worlds, augmented reality, and the Internet.”

The metaverse is a world of our own making, which exists in the dimensions of a digital reality. There we imagine we can fix what we screwed up in the maddeningly unpredictable real world. It is the ultimate in bright, shiny objects.

Science fiction and the entertainment industry have been toying with the idea of the metaverse for some time now. The term itself comes from Neal Stephenson’s 1992 novel “Snow Crash.” It has been given the Hollywood treatment numerous times, notably in “The Matrix” and “Ready Player One.” But Silicon Valley venture capitalists are rushing to make fiction into fact.

You can’t really blame us for throwing in the towel on the world we have systematically wrecked. There are few glimmers of hope out there in the real world. What we have wrought is painful to contemplate. So we are doing what we’ve always done: reaching for what we want rather than fixing what we have. Take the Reporters Without Borders Uncensored Library, for example.

There are many places in the real world where journalism is censored, like Russia, the Middle East, Vietnam and China. But in the metaverse, there is the option of leapfrogging over all the political hurdles we stumble over in the real world. So Reporters Without Borders and two German creative agencies built a meta library in the meta world of Minecraft. Here, censored articles are made into virtual books, accessible to all who want to check them out.

It’s hard to find fault with this. Censorship is a tool of oppression. Here, a virtual world offered an inviting loophole to circumvent it. The metaverse came to the rescue. What is the problem with that?

The biggest risk is this: We weren’t built for the metaverse. We can probably adapt to it, somewhat, but everything that makes us tick has evolved in a flesh-and-blood world, and — to quote a line from Joni Mitchell’s “Big Yellow Taxi” — “You don’t know what you’ve got till it’s gone.”

It’s fair to say that right now the metaverse is a novelty. Most of your neighbors, friends and family have never heard of it. But odds are it will become our life. In a 2019 article called “Welcome to the Mirror World” in Wired, Kevin Kelly explained: “we are building a 1-to-1 map of almost unimaginable scope. When it’s complete, our physical reality will merge with the digital universe.”

In a Forbes article, futurist Cathy Hackl gives us an example of what this merger might look like:

“Imagine walking down the street. Suddenly, you think of a product you need. Immediately next to you, a vending machine appears, filled with the product and variations you were thinking of. You stop, pick an item from the vending machine, it’s shipped to your house, and then continue on your way.”

That sounds benign — even helpful. But if we’ve learned one thing it’s this: When we try to merge technology with human behavior, there are always unintended consequences that arise. And when we’re talking about the metaverse, those consequences will likely be massive.

It is hubristic in the extreme to imagine we can engineer a world that will be a better match for our evolved humanware mechanics than the world we actually evolved within. It’s sheer arrogance to imagine we can build that world, and also arrogant to imagine that we can thrive within it.

We have a bright, shiny bias built into us that will likely lead us to ignore the crumbling edifice of our reality. German futurist Gerd Leonhard, for one, warns us about an impending collision between technology and humanity:

“Technology is not what we seek but how we seek: the tools should not become the purpose. Yet increasingly, technology is leading us to ‘forget ourselves.’”

Imagine a Pandemic without Technology

As the writer of a weekly post that tends to look at the intersection between human behavior and technology, the past 18 months have been interesting – and by interesting, I mean a twisted ride through gut-wrenching change unlike anything I have ever seen before.

I can’t even narrow it down to 18 months. Before that, there was plenty more that was “unprecedented” – to berrypick a word from my post from a few weeks back. I have now been writing for MediaPost in one place or another for 17 years. My very first post was on August 19, 2004. That was 829 posts ago. If you add the additional posts I’ve done for my own blog – outofmygord.com – I’ve just ticked over 1,100 on my odometer.  That’s a lot of soul searching about technology. And the last several months have still been in a class by themselves.

Now, part of this might be where my own head is at. Believe it or not, I do sometimes try to write something positive. But as soon as my fingers hit the keyboard, things seem to spiral downwards. Every path I take seems to take me somewhere dark. There has been precious little that has sparked optimism in my soul.

Today, for example, prior to writing this, I took three passes at writing something else. Each quickly took a swerve towards impending doom. I’m getting very tired of this. I can only imagine how you feel, reading it.

So I finally decided to try a thought experiment. “What if,” I wondered, “we had gone through the past 17 months without the technology we take for granted? What if there was no Internet, no computers, no mobile devices? What if we had lived through the Pandemic with only the technology we had – say – a hundred years ago, during the global pandemic of the Spanish Flu starting in 1918? Perhaps the best way to determine the sum total contribution of technology is to do it by process of elimination.”

The Cons

Let’s get the negatives out of the way. First, you might say that technology enabled the flood of misinformation and conspiracy theorizing that has been so top-of-mind for us. Well, yes – and no.

Distrust in authority is nothing new. It’s always been there, at one end of a bell curve that spans the attitudes of our society. And nothing brings the outliers of society into global focus faster than a crisis that affects all of us.

There was public pushback against the very first vaccine ever invented: the smallpox vaccine. Now granted, the early method was to rub pus from a cowpox blister into a cut in your skin and hope for the best. But it worked. Smallpox is now a thing of the past.

And, if we are talking about pushback against public health measures, that’s nothing new either. Exactly the same thing happened during the 1918-1919 Pandemic. Here’s one eerily familiar excerpt from a journal article looking at the issue, “Public-gathering bans also exposed tensions about what constituted essential vs. unessential activities. Those forced to close their facilities complained about those allowed to stay open. For example, in New Orleans, municipal public health authorities closed churches but not stores, prompting a protest from one of the city’s Roman Catholic priests.”

What is different, thanks to technology, is that public resistance is so much more apparent than it’s ever been before. And that resistance is coming with faces and names we know attached. People are posting opinions on social media that they would probably never say to you in a face-to-face setting, especially if they knew you disagreed with them. Our public and private discourse is now held at arm’s length by technology. Gone are all the moderating effects that come with sharing the same physical space.

The Pros

Try as I might, I couldn’t think of another “con” that technology has brought to the past 17 months. The “pro” list, however, is far too long to cover in this post, so I’ll just mention a few that come immediately to mind.

Let’s begin with the counterpoint to the aforementioned “con” – the misinformation factor. While misinformation was definitely spread over the past year and a half, so was reliable, factual information. And for those willing to pay attention to it, it enabled us to find out what we needed to know in order to practice public health measures at a speed previously unimagined. Without technology, we would have been slower to act and – perhaps – fewer of us would have acted at all. At worst, in this case technology probably nets out to zero.

But technology also enabled the world to keep functioning, even if it was in a different form. Working from home would have been impossible without it. Commercial engines kept chugging along. Business meetings switched to online platforms. The Dow Jones Industrial Average, as of this writing, is over 20% higher than it was before the pandemic. In contrast, if you look at performance over the 1918-1919 pandemic, the stock market was almost 32% lower at the end of the third wave than it was at the start of the first. Of course, there are other factors to consider, but I suspect we can thank technology for at least some of that.

It’s easy to point to the negatives that technology brings, but if you consider it as a whole, technology is overwhelmingly a blessing.

What was interesting to me in this thought experiment was how apparent it was that technology keeps the cogs of our society functioning more effectively, but if there is a price to be paid, it typically comes at the cost of our social bonds.

Picking Apart the Concept of Viral Videos

In case you’re wondering, the most popular video on YouTube is the toxic brain worm Baby Shark Dance. It has over 8.2 billion views.

And from that one example, we tend to measure everything that comes after.  Digital has screwed up our idea of what it means to go viral. We’re not happy unless we get into the hyper-inflated numbers typical of social media influencers. Maybe not Baby Shark numbers, but definitely in the millions.

But does that mean that something that doesn’t hit these numbers is a failure? An old stat I found said that over half of YouTube videos have fewer than 500 views. I couldn’t find a more recent tally, but I suspect that’s still true.

And, if it is, my immediate thought is that those videos must suck. They weren’t worth sharing. They didn’t have what it takes to go viral. They are forever stuck in the long, long tail of YouTube wannabes.

But is going viral all it’s cracked up to be?

Let’s do a little back-of-an-envelope comparison. A week and a half ago, I launched a video that has since gotten about 1,500 views. A few days ago, a YouTuber named MrBeast launched a video titled, “I Spent 50 Hours Buried Alive.” In less than 24 hours, it racked up over 30 million views. Compared to that, one might say my launch was a failure. But was it?  It depends on what your goals for a video are. And it also depends on the structure of social networks.

Social networks are built of nodes – tight clusters of people connected by strong ties. Within a node, people have a lot in common. But nodes are connected to each other by weak ties, bonds that stretch across groups that have less in common. Understanding this structure is important to understanding how a video might spread through a network.
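
To make that structure concrete, here is a minimal simulation sketch of the idea (my own toy model, not the study mentioned below): two tightly knit clusters – the “nodes” above – joined by a handful of weak ties. The cluster sizes and pass-along probabilities are invented purely for illustration; the point is simply that shares travel easily along strong ties and rarely across weak ones, so a video tends to saturate its home cluster and stall at the boundary.

```python
# Toy model of viral spread: two clusters ("nodes") joined by a few weak ties.
# All sizes and probabilities below are illustrative guesses, not measured values.
import random

random.seed(42)

CLUSTER_SIZE = 50
P_STRONG = 0.30   # chance a share travels along a strong (within-cluster) tie
P_WEAK = 0.02     # chance a share travels along a weak (between-cluster) tie
WEAK_TIES = 3     # number of bridges between the two clusters

# People 0..49 form cluster A, people 50..99 form cluster B.
people = list(range(2 * CLUSTER_SIZE))
edges = []  # (person, person, pass-along probability)

# Strong ties: everyone within a cluster knows everyone else.
for start in (0, CLUSTER_SIZE):
    members = people[start:start + CLUSTER_SIZE]
    for i, a in enumerate(members):
        for b in members[i + 1:]:
            edges.append((a, b, P_STRONG))

# Weak ties: a few random bridges between the clusters.
for _ in range(WEAK_TIES):
    a = random.randrange(0, CLUSTER_SIZE)
    b = random.randrange(CLUSTER_SIZE, 2 * CLUSTER_SIZE)
    edges.append((a, b, P_WEAK))

# Build an adjacency list so each person knows their ties.
neighbours = {p: [] for p in people}
for a, b, prob in edges:
    neighbours[a].append((b, prob))
    neighbours[b].append((a, prob))

# Simple cascade: each new viewer gets one chance to pass the video
# along each of their ties.
seen = {0}        # the video starts with one person in cluster A
frontier = [0]
while frontier:
    next_frontier = []
    for person in frontier:
        for other, prob in neighbours[person]:
            if other not in seen and random.random() < prob:
                seen.add(other)
                next_frontier.append(other)
    frontier = next_frontier

in_a = sum(1 for p in seen if p < CLUSTER_SIZE)
in_b = len(seen) - in_a
print(f"Cluster A viewers: {in_a}/{CLUSTER_SIZE}  Cluster B viewers: {in_b}/{CLUSTER_SIZE}")
```

Run it a few times with different seeds and the video almost always stays confined to cluster A, which is exactly the dynamic the next paragraphs describe.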

Depending on your video’s content, it may never move beyond one node. It may not have the characteristics necessary to get passed along the ties that connect separate nodes. This was something I explored many years ago when I looked at how rumors spread through social networks. In that post, I talked about a study by Frenzen and Nakamoto that looked at some of the variables required to make a rumor spread between nodes.

Some of the same dynamics hold true when we look at viral videos. If you’ve had fewer than 500 views, as apparently over 50% of YouTube videos do, chances are your video got stuck in a node. But this might not be a bad thing. Sometimes going deep is better than going wide.

My video, for example, is definitely aimed at one particular audience, people of Italian descent in the region where I live. According to the latest government census, the total possible “target” for my video is probably less than 10,000 people. And, if this is the case, I’ve already reached 15% of my audience. That’s not a mind-blowing success record, but it’s a start.

My goal for the video was to ignite an interest in my audience to learn more about their own heritage. And it seems to be working. I’ve never seen more interest in people wanting to learn about their own ancestors in particular, or the story of Italians in the Okanagan region of British Columbia in general.

My goal was never to just get a like or even a share, although that would be nice. My goal was to move people enough to act. I wanted to go deep, not wide.

To go “deep,” you have to fully leverage those “strong ties.” What is the stuff those ties are made of? What is the common ground within the node? The things that make people watch all 13-and-a-half minutes of a video about Italian immigrants are the very same things that will keep it stuck within that particular node. As long as it stays there, it will be interesting and relevant. But it won’t jump across a weak tie, because there is no common ground to act as a launching pad.

If the goal is to go “wide” and set a network effect in motion, then you have to play to the lowest common denominator: those universal emotions that we all share, which can be ignited just long enough to capture a quick view and a social share. According to this post about how to go viral, they are: status, identity protection, being helpful, safety, order, novelty, validation and voyeurism.

Another way to think of it is this: Do you want your content to trigger “fast” thinking or “slow” thinking? Again, I use Nobel laureate Daniel Kahneman’s cognitive analogy about how the brain works at two levels: fast and slow. If you want your content to “go wide,” you want to trigger the “fast” circuits of the brain. If you want your content to “go deep,” you’re looking to activate the “slow” circuits. It doesn’t mean that “deep” content can’t be emotionally charged. The opposite is often true. But these are emotions that require some cognitive focus and mindfulness, not a hair-trigger reaction. And, if you’re successful, that makes them all the more powerful. These are emotions that serve their inherent purpose. They move us to action.

I think this whole idea of going “viral” suffers from the same hyper-inflation of expectations that seems to affect everything that goes digital. We are naturally comparative and competitive animals, and the world that’s gone viral tends to focus us on quantity rather than quality. We can’t help looking at trending YouTube videos and hoping that our video will get launched into the social sharing stratosphere.

But that doesn’t mean a video that stays stuck with a few hundred views didn’t do its job. Maybe the reason the numbers are low is that the video is doing exactly what it was intended to do.

COVID And The Chasm Crossing

For most of us, it’s been a year living with the pandemic. I was curious what my topic was a year ago this week. I was talking about the brand crisis at a certain Mexican brewing giant, whose flagship brand was suddenly and unceremoniously linked with a global pandemic. Of course, we didn’t know back then just how “global” it would be.

Ahhh — the innocence of early 2020.

The past year will likely be an historic inflection point in many societal trend lines. We’re not sure at this point how things will change, but we’re pretty sure they will change. You can’t take what has essentially been a 12-month anomaly in everything we know as normal, plunk it down on every corner of the globe and expect everything just to bounce back to where it was.

If I could vault 10 years in the future and then look back at today, I suspect I would be talking about how our relationship with technology changed due to the pandemic. Yes, we’re all sick of Zoom. We long for the old days of actually seeing another face in the staff lunchroom. And we realize that bingeing “Emily in Paris” on Netflix comes up abysmally short of the actual experience of stepping in dog shit as we stroll along the Seine.

C’est la vie.

But that’s my point. For the past 12 months, these watered-down digital substitutes have been our lives. We were given no choice. And some of it hasn’t sucked. As I wrote last week, there are times when a digital connection may actually be preferable to a physical one.

There is now a whole generation of employees who are considering their work-life balance in the light of being able to work from home for at least part of the time. Meetings the world over are being reimagined, thanks to the attractive cost/benefit ratio of being able to attend virtually. And, for me, I may have permanently swapped spin classes at the gym for riding my bike trainer in my basement. It took me a while to get used to it, but now that I have, I think it will stick.

Getting people to try something new — especially when it’s technology — is a tricky process. There are a zillion places on the uphill slope of the adoption curve where we can get mired and give up. But, as I said, that hasn’t been an option for us in the past 12 months. We had to stick it out. And now that we have, we realize we like much of what we were forced to adopt. All we’re asking for is the freedom to pick and choose what we keep and what we toss away.

I suspect  many of us will be a lot more open to using technology now that we have experienced the tradeoffs it entails between effectiveness and efficiency. We will make more room in our lives for a purely utilitarian use of technology, stripped of the pros and cons of “bright shiny object” syndrome.

Technology typically gets trapped at both the dread and pseudo-religious devotion ends of the Everett Rogers Adoption Curve. Either you love it, or you hate it. Those who love it form the market that drives the development of our technology, leaving those who hate it further and further behind.

As such, the market for technology tends to skew to the “gee whiz” end of the market, catering to those who buy new technology just because it’s new and cool. This bias has embedded an acceptance of planned obsolescence that just seems to go hand-in-hand with the marketing of technology. 

My previous post about technology leaving seniors behind is an example of this. Even if seniors start out as early adopters, the perpetual chase of the bright shiny object that typifies the tech market can leave them behind.

But COVID-19 changed all that. It suddenly forced all of us toward the hump that lies in the middle of the adoption curve. It has left the world no choice but to cross the “chasm” that  Geoffrey Moore wrote about 30 years ago in his book “Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers.” He explained that the chasm was between “visionaries (early adopters) and pragmatists (early majority),” according to Wikipedia.

This has some interesting market implications. After I wrote my post, a few readers reached out to say they were working on solutions that address the need of seniors to stay connected, with a device that is easier for them to use and not subject to constant updating and relearning. Granted, none of them was from Apple or Google, but at least someone was thinking about it.

As the pandemic forced the practical market for technology to expand, bringing customers who had everyday needs for their technology, it created more market opportunities. Those opportunities create pockets of profit that allow for the development of tools for segments of the market that used to be ignored.

It remains to be seen if this market expansion continues after the world returns to a more physically based definition of normal. I suspect it will.

This market evolution may also open up new business model opportunities — where we’re actually willing to pay for online services and platforms that used to be propped up by selling advertising. This move alone would take technology a massive step forward in ethical terms. We wouldn’t have this weird moral dichotomy where marketers are grieving the loss of data (as fellow Media Insider Ted McConnell does in this post) because tech is finally stepping up and protecting our personal privacy.

Perhaps — I hope — the silver lining in the past year is that we will look at technology more as it should be: a tool that’s used to make our lives more fulfilling.