Picking Apart the Concept of Viral Videos

In case you’re wondering, the most popular video on YouTube is that toxic brain worm, “Baby Shark Dance.” It has over 8.2 billion views.

And from that one example, we tend to measure everything that comes after.  Digital has screwed up our idea of what it means to go viral. We’re not happy unless we get into the hyper-inflated numbers typical of social media influencers. Maybe not Baby Shark numbers, but definitely in the millions.

But does that mean that something that doesn’t hit these numbers is a failure? An old stat I found said that over half of YouTube videos have fewer than 500 views. I couldn’t find a more recent tally, but I suspect that’s still true.

And, if it is, my immediate thought is that those videos must suck. They weren’t worth sharing. They didn’t have what it takes to go viral. They are forever stuck in the long, long tail of YouTube wannabes.

But is going viral all it’s cracked up to be?

Let’s do a little back-of-an-envelope comparison. A week and a half ago, I launched a video that has since gotten about 1,500 views. A few days ago, a YouTuber named MrBeast launched a video titled, “I Spent 50 Hours Buried Alive.” In less than 24 hours, it racked up over 30 million views. Compared to that, one might say my launch was a failure. But was it?  It depends on what your goals for a video are. And it also depends on the structure of social networks.

Social networks are built of clusters. Within a cluster, people are connected by strong ties: they have a lot in common. But clusters are linked to one another by weak ties, bonds that stretch across groups that have less in common. Understanding this structure is important to understanding how a video might spread through a network.

Depending on your video’s content, it may never move beyond one cluster. It may not have the characteristics necessary to get passed along the weak ties that connect separate clusters. This was something I explored many years ago when I looked at how rumors spread through social networks. In that post, I talked about a study by Frenzen and Nakamoto that looked at some of the variables required to make a rumor spread between clusters.
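To make that dynamic concrete, here is a toy simulation in Python (my own sketch, not the Frenzen and Nakamoto model): two tightly knit clusters, strong ties inside each one, and a single weak tie bridging them. The pass-along probabilities and cluster sizes are made-up numbers, purely for illustration.

```python
# A toy sketch, not the author's model: two tight clusters joined by one weak tie.
# Shares travel easily along strong ties, rarely across the weak one, so most
# runs leave the video "stuck" in its home cluster. All numbers are assumptions.
import random

STRONG_PASS = 0.6   # assumed chance a viewer shares along a strong tie
WEAK_PASS = 0.05    # assumed chance a share jumps the weak tie between clusters

cluster_a = [f"a{i}" for i in range(5)]
cluster_b = [f"b{i}" for i in range(5)]

# person -> list of (neighbour, probability the video gets passed along)
ties = {}
for cluster in (cluster_a, cluster_b):
    for person in cluster:
        ties[person] = [(other, STRONG_PASS) for other in cluster if other != person]

# One weak tie bridges the two clusters.
ties["a0"].append(("b0", WEAK_PASS))
ties["b0"].append(("a0", WEAK_PASS))

def spread(seed):
    """Return everyone the video reaches, starting from one viewer."""
    reached = {seed}
    frontier = [seed]
    while frontier:
        person = frontier.pop()
        for neighbour, p in ties[person]:
            if neighbour not in reached and random.random() < p:
                reached.add(neighbour)
                frontier.append(neighbour)
    return reached

escaped = sum(any(p in cluster_b for p in spread("a0")) for _ in range(1000))
print(f"The video escaped its home cluster in {escaped} of 1,000 runs")
```

Run it a few times and the pattern holds: the video reliably saturates its own cluster, but only a handful of runs ever cross the bridge. That, in miniature, is the difference between going deep and going wide.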

Some of the same dynamics hold true when we look at viral videos. If you’ve had fewer than 500 views, as apparently over half of YouTube videos have, chances are you got stuck in a cluster. But this might not be a bad thing. Sometimes going deep is better than going wide.

My video, for example, is definitely aimed at one particular audience: people of Italian descent in the region where I live. According to the latest government census, the total possible “target” for my video is probably fewer than 10,000 people. And, if this is the case, I’ve already reached 15% of my audience. That’s not a mind-blowing success record, but it’s a start.

My goal for the video was to ignite an interest in my audience to learn more about their own heritage. And it seems to be working. I’ve never seen more interest from people in learning about their own ancestors in particular, or about the story of Italians in the Okanagan region of British Columbia in general.

My goal was never to just get a like or even a share, although that would be nice. My goal was to move people enough to act. I wanted to go deep, not wide.

To go “deep,” you have to fully leverage those “strong ties.” What is the stuff those ties are made of? What is the common ground within the cluster? The things that make people watch all 13-and-a-half minutes of a video about Italian immigrants are the very same things that will keep it stuck within that particular cluster. As long as it stays there, it will be interesting and relevant. But it won’t jump across a weak tie, because there is no common ground to act as a launching pad.

If the goal is to go “wide” and set a network effect in motion, then you have to play to the lowest common denominator: those universal emotions that we all share, which can be ignited just long enough to capture a quick view and a social share. According to this post about how to go viral, they are: status, identity protection, being helpful, safety, order, novelty, validation and voyeurism.

Another way to think of it is this: Do you want your content to trigger “fast” thinking or “slow” thinking? Again, I use Nobel laureate Daniel Kahneman’s cognitive analogy about how the brain works at two levels: fast and slow. If you want your content to “go wide,” you want to trigger the “fast” circuits of the brain. If you want your content to “go deep,” you’re looking to activate the “slow” circuits. It doesn’t mean that “deep” content can’t be emotionally charged. The opposite is often true. But these are emotions that require some cognitive focus and mindfulness, not a hair-trigger reaction. And, if you’re successful, that makes them all the more powerful. These are emotions that serve their inherent purpose. They move us to action.

I think this whole idea of going “viral” suffers from the same hyper-inflation of expectations that seems to affect everything that goes digital. We are naturally comparative and competitive animals, and the world that’s gone viral tends to focus us on quantity rather than quality. We can’t help looking at trending YouTube videos and hoping that our video will get launched into the social sharing stratosphere.

But that doesn’t mean a video that stays stuck with a few hundred views didn’t do its job. Maybe the reason the numbers are low is that the video is doing exactly what it was intended to do.

COVID And The Chasm Crossing

For most of us, it’s been a year of living with the pandemic. I was curious what my topic was a year ago this week. It was the brand crisis at a certain Mexican brewing giant, whose flagship brand was suddenly and unceremoniously linked with a global pandemic. Of course, we didn’t know back then just how “global” it would be.

Ahhh — the innocence of early 2020.

The past year will likely be an historic inflection point in many societal trend lines. We’re not sure at this point how things will change, but we’re pretty sure they will change. You can’t take what has essentially been a 12-month anomaly in everything we know as normal, plunk it down on every corner of the globe and expect everything just to bounce back to where it was.

If I could vault 10 years in the future and then look back at today, I suspect I would be talking about how our relationship with technology changed due to the pandemic. Yes, we’re all sick of Zoom. We long for the old days of actually seeing another face in the staff lunchroom. And we realize that bingeing “Emily in Paris” on Netflix comes up abysmally short of the actual experience of stepping in dog shit as we stroll along the Seine.

C’est la vie.

But that’s my point. For the past 12 months, these watered-down digital substitutes have been our lives. We were given no choice. And some of it hasn’t sucked. As I wrote last week, there are times when a digital connection may actually be preferable to a physical one.

There is now a whole generation of employees who are considering their work-life balance in the light of being able to work from home for at least part of the time. Meetings the world over are being reimagined, thanks to the attractive cost/benefit ratio of being able to attend virtually. And, for me, I may have permanently swapped spin classes at the gym for riding my bike trainer in my basement. It took me a while to get used to it, but now that I have, I think it will stick.

Getting people to try something new — especially when it’s technology — is a tricky process. There are a zillion places on the uphill slope of the adoption curve where we can get mired and give up. But, as I said, that hasn’t been an option for us in the past 12 months. We had to stick it out. And now that we have, we realize we like much of what we were forced to adopt. All we’re asking for is the freedom to pick and choose what we keep and what we toss away.

I suspect  many of us will be a lot more open to using technology now that we have experienced the tradeoffs it entails between effectiveness and efficiency. We will make more room in our lives for a purely utilitarian use of technology, stripped of the pros and cons of “bright shiny object” syndrome.

Technology typically gets trapped at both the dread and pseudo-religious devotion ends of the Everett Rogers Adoption Curve. Either you love it, or you hate it. Those who love it form the market that drives the development of our technology, leaving those who hate it further and further behind.

As such, the market for technology tends to skew to the “gee whiz” end of the spectrum, catering to those who buy new technology just because it’s new and cool. This bias has embedded an acceptance of planned obsolescence that seems to go hand-in-hand with the marketing of technology.

My previous post about technology leaving seniors behind is an example of this. Even if seniors start out as early adopters, the perpetual chase of the bright shiny object that typifies the tech market can leave them behind.

But COVID-19 changed all that. It suddenly forced all of us toward the hump that lies in the middle of the adoption curve. It has left the world no choice but to cross the “chasm” that  Geoffrey Moore wrote about 30 years ago in his book “Crossing the Chasm: Marketing and Selling High-Tech Products to Mainstream Customers.” He explained that the chasm was between “visionaries (early adopters) and pragmatists (early majority),” according to Wikipedia.

This has some interesting market implications. After I wrote my post, a few readers reached out saying they were working on solutions that address seniors’ need to stay connected with a device that is easier for them to use and isn’t subject to constant updating and relearning. Granted, none of them were from Apple or Google, but at least someone was thinking about it.

As the pandemic forced the practical market for technology to expand, bringing customers who had everyday needs for their technology, it created more market opportunities. Those opportunities create pockets of profit that allow for the development of tools for segments of the market that used to be ignored.

It remains to be seen if this market expansion continues after the world returns to a more physically based definition of normal. I suspect it will.

This market evolution may also open up new business model opportunities — where we’re actually willing to pay for online services and platforms that used to be propped up by selling advertising. This move alone would take technology a massive step forward in ethical terms. We wouldn’t have this weird moral dichotomy where marketers are grieving the loss of data (as fellow Media Insider Ted McConnell does in this post) because tech is finally stepping up and protecting our personal privacy.

Perhaps — I hope — the silver lining in the past year is that we will look at technology more as it should be: a tool that’s used to make our lives more fulfilling.

To Be There – Or Not To Be There

According to Eventbrite, hybrid events are the hottest thing for 2021. So I started thinking, what would that possibly look like, as a planner or a participant?

The interesting thing about hybrid events is that they force us to really think about how we experience things. What process do we go through when we let the outside world in? What do we lose if we do that virtually? What do we gain, if anything? And, more importantly, how do we connect with other people during those experiences?

These are questions we didn’t think much about even a year ago. But today, in a reality that’s trying to straddle both the physical and virtual worlds, they are highly relevant to how we’ll live our lives in the future.

The Italian Cooking Lesson

First, let’s try a little thought experiment.

In our town, the local Italian Club — in which both my wife and I are involved — offered cooking lessons before we were all locked down. Groups of eight to 12 people would get together with an exuberant Italian chef in a large commercial kitchen, and together they would make an authentic dish like gnocchi or ravioli. There was a little vino, a little Italian culture and a lot of laughter. These classes were a tremendous hit.

That all ended last March. But we hope to start offering them again in late 2021 or 2022. And, if we do, would it make sense to offer them as a “hybrid” event, where you can participate in person or pick up a box of preselected ingredients and follow along in your own kitchen?

As an event organizer, this would be tempting. You can still charge the full price for physical attendance, where you’re restricted to 12 people, but you could create an additional revenue stream by introducing a virtual option that could include far more participants. Even at a lower registration fee, it would still dramatically increase revenue at a relatively small incremental cost. It would be “molto” profitable.

But now consider this as an attendee. Would you sign up for a virtual event like that? If you had no other option to experience it, maybe. But what if you could actually be there in person? Then what? Would you feel relegated to a second-class experience by being isolated in your own kitchen, without many of the sensory benefits that go along with the physical experience?

The Psychology of Zoom Fatigue

When I thought about our cooking lesson example, I was feeling less than enthused. And I wondered why.

It turns out that there’s some actual brain science behind my digital ennui. In an article in the Psychiatric Times, Jena Lee, MD, takes us on a “Neuropsychological Exploration of Zoom Fatigue.”

A decade ago, I was writing a lot about how we balance risk and reward. I believe that a lot of our behaviors can be explained by how we calculate the dynamic tension between those two things. It turns out that it may also be at the root of how we feel about virtual events. Dr. Lee explains,

“A core psychological component of fatigue is a rewards-costs trade-off that happens in our minds unconsciously. Basically, at every level of behavior, a trade-off is made between the likely rewards versus costs of engaging in a certain activity.”

Let’s take our Italian cooking class again. Let’s imagine we’re there in person. For our brain, this would hit all the right “reward” buttons that come with being physically “in the moment.” Subconsciously, our brains would reward us by releasing oxytocin and dopamine along with other “pleasure” neurochemicals that would make the experience highly enjoyable for us. The cost/reward calculation would be heavily weighted toward “reward.”

But that’s not the case with the virtual event. Yes, it might still be considered “rewarding,” but on an entirely different, and lesser, scale than the in-person experience. We would also have the added costs of figuring out the technology required, logging into the lesson and trying to follow along. Our risk/reward calculator just might decide the trade-offs weren’t worth it.

Without me even knowing it, this was the calculation that was going on in my head that left me less than enthused.

 But there is a flip side to this.

Reducing the Risk Virtually

Last fall, a new study from Oracle in the U.K. was published with the headline, “82% of People Believe Robots Can Support Their Mental Health Better than Humans.”

Something about that just didn’t seem right to me. How could this be? Again, we had the choice between virtual and physical connection, and this time the odds were overwhelmingly in favor of the virtual option.

But when I thought about it in terms of risk and reward, it suddenly made sense. Talking about our own mental health is a high-risk activity. It’s sad to say, but opening up to your manager about job-related stress could get you a sympathetic ear, or it could get you fired. We are taking baby steps towards destigmatizing mental health issues, but we’re at the beginning of a very long journey.

In this case, the risk/reward calculation is flipped completely around. Virtual connections, which rely on limited bandwidth — and therefore limited vulnerability on our part — seem like a much lower risk alternative than pouring our hearts out in person. This is especially true if we can remain anonymous.

It’s All About Human Hardware

The idea of virtual/physical hybrids with expanded revenue streams will be very attractive to marketers and event organizers. There will be many jumping on this bandwagon. But, like all the new opportunities that technology brings us, it has to interface with a system that has been around for hundreds of thousands of years — otherwise known as our brain.

The Importance of Playing Make-Believe

One of my favourite sounds in the world is children playing. Although our children are well past that age, we have stayed in a neighbourhood where new families move in all the time. One of the things that has always amazed me is a child’s ability to make believe. I used to do this but I don’t any more. At least, I don’t do it the same way I used to.

Just take a minute to think about the term itself: make-believe. The very words connote the creation of an imaginary world that you and your playmates can share, even in that brief and fleeting moment. Out of the ether, you can create an ephemeral reality where you can play God. A few adults can still do that. George R.R. Martin pulled it off. J.K. Rowling did likewise. But for most of us, our days of make-believe are well behind us.

I worry about the state of play. I am concerned that rather than making believe themselves, children today are playing in the manufactured and highly commercialized imaginations of profit-hungry corporations. There is no making — there is only consuming. And that could have some serious consequences.

Although we don’t use imagination the way we once did, it is the foundation of the most important cognitive tasks we do. It was Albert Einstein who said, “Imagination is more important than knowledge. For knowledge is limited, whereas imagination embraces the entire world, stimulating progress, giving birth to evolution.”

It is imagination that connects the dots, explores the “what-ifs” and peeks beyond the bounds of the known. It is what separates us from machines.

In that, Einstein presciently nailed the importance of imagination. Only here does the mysterious alchemy of the human mind somehow magically weave fully formed worlds out of nothingness and snippets of reality. We may not play princess anymore, but our ability to imagine underpins everything of substance that we think about.

The importance of playing make-believe goes beyond cognition. Imagination is also essential to our ability to empathize. We need it to put ourselves in the place of others. Our “theory of mind” is just one more of imagination’s many facets.

This thing we take for granted has been linked to a massive range of essential cognitive developments. In addition to the above examples, pretending gives children a safe place to begin to define their own place in society. It helps them explore interpersonal relationships. It creates the framework for them to assimilate information from the world into their own representation of reality.

We are not the only animals that play when we’re young. It’s true for many mammals, and scientists have discovered it’s also essential in species as diverse as crocodiles, turtles, octopuses and even wasps.

For other species, though, play seems mainly intended to help the young come to terms with surviving in the physical world. We’re alone in our need for elaborate play involving imagination and cognitive games.

With typical human hubris, we adults have been on a century-long mission to structure the act of play. In doing so, we have been imposing our own rules, frameworks and expectations on something we should be keeping as is. Much of the value of play comes from its very lack of structure. Playing isn’t as effective when it’s done under adult supervision. Kids have to be kids.

Play definitely loses much of its value when it becomes passive consumption of content imagined and presented by others through digital entertainment channels. Childhood is meant to give us a blank canvas to colour with our imagination.

As we grow, the real world encroaches on this canvas.  But the delivery of child-targeted content through technology is also shrinking the boundaries of our own imagination.

Still, despite corporate interests that run counter to playing in its purest sense, I suspect that children may be more resilient than I fear. After all, I can still hear the children playing next door. And their imaginations still awe and inspire me.

Less Tech = Fewer Regrets

In a tech-ubiquitous world, I fear our reality is becoming more “tech” and less “world.” But how do you fight that? Well, if you’re Kendall Marianacci – a recent college grad – you ditch your phone and move to Nepal. In the process, she learned that “paying attention to the life in front of you opens a new world.”

In a recent post, she reflected on lessons learned by truly getting off the grid:

“Not having any distractions of a phone and being immersed in this different world, I had to pay more attention to my surroundings. I took walks every day just to explore. I went out of my way to meet new people and ask them questions about their lives. When this became the norm, I realized I was living for one of the first times of my life. I was not in my own head distracted by where I was going and what I needed to do. I was just being. I was present and welcoming to the moment. I was compassionate and throwing myself into life with whoever was around me.”

It’s sad and a little shocking that we have to go to such extremes to realize how much of our world can be obscured by a little 5-inch screen. Where did tech that was supposed to make our lives better go off the rails? And was the derailment intentional?

“Absolutely,” says Jesse Weaver, a product designer. In a post on Medium.com, he lays out – in alarming terms – our tech dependency and the trade-off we’re agreeing to:

“The digital world, as we’ve designed it, is draining us. The products and services we use are like needy friends: desperate and demanding. Yet we can’t step away. We’re in a codependent relationship. Our products never seem to have enough, and we’re always willing to give a little more. They need our data, files, photos, posts, friends, cars, and houses. They need every second of our attention.

We’re willing to give these things to our digital products because the products themselves are so useful. Product designers are experts at delivering utility.”

But are they? Yes, there is utility here, but it’s wrapped in a thick layer of addiction. What product designers are really good at is fostering addiction by dangling a carrot of utility. And, as Weaver points out, we often mistake utility for empowerment,

“Empowerment means becoming more confident, especially in controlling our own lives and asserting our rights. That is not technology’s current paradigm. Quite often, our interactions with these useful products leave us feeling depressed, diminished, and frustrated.”

That’s not just Weaver’s opinion. A new study from HumaneTech.com backs it up with empirical evidence. They partnered with Moment, a screen time tracking app, “to ask how much screen time in apps left people feeling happy, and how much time left them in regret.”

According to 200,000 iPhone users, here are the apps that make people happiest:

  1. Calm
  2. Google Calendar
  3. Headspace
  4. Insight Timer
  5. The Weather
  6. MyFitnessPal
  7. Audible
  8. Waze
  9. Amazon Music
  10. Podcasts

That’s three meditation apps, three utilitarian apps, one fitness app, one entertainment app and two apps that help you broaden your intellectual horizons. If you are talking human empowerment – according to Weaver’s definition – you could do a lot worse than this roundup.

But here were the apps that left their users with a feeling of regret:

  1. Grindr
  2. Candy Crush Saga
  3. Facebook
  4. WeChat
  5. Candy Crush
  6. Reddit
  7. Tweetbot
  8. Weibo
  9. Tinder
  10. Subway Surf

What is even more interesting is the average time spent in these apps. For the first group, the average daily usage was 9 minutes. For the regret group, the average daily time spent was 57 minutes! We feel better about apps that do their job, add something to our lives and then let us get on with living that life. What we hate are time sucks that may offer a kernel of functionality wrapped in an interface that ensnares us like a digital spider web.

This study comes from the Center for Humane Technology, headed by ex-Googler Tristan Harris. The goal of the Center is to encourage designers and developers to create apps that move “away from technology that extracts attention and erodes society, towards technology that protects our minds and replenishes society.”

That all sounds great, but what does it really mean for you and me and everybody else who hasn’t moved to Nepal? It all depends on what revenue model is driving the development of these apps and platforms. If it is anything that depends on advertising – in any form – don’t count on any nobly intentioned shifts in design direction anytime soon. More likely, it will mean some half-hearted placations like Apple’s new Screen Time warning that pops up on your phone every Sunday, giving you the illusion of control over your behaviour.

Why an illusion? Because things like Apple’s Screen Time are great for our prefrontal cortex, the intent-driven part of our rational brain that puts our best intentions forward. They’re not so good for our lizard brain, which subconsciously drives us to play Candy Crush and swipe our way through Tinder. And when it comes to addiction, the lizard brain has been on a winning streak for most of the history of mankind. I don’t like our odds.

The developers’ escape hatch is always the same – they’re giving us control. It’s our own choice, and freedom of choice is always a good thing. But there is an unstated deception here. It’s the same lie that Mark Zuckerberg told last Wednesday when he laid out the privacy-focused future of Facebook. He’s putting us in control. But he’s not. What he’s doing is making us feel better about spending more time on Facebook. And that’s exactly the problem. The less we worry about the time we spend on Facebook, the less we will think about it at all. The less we think about it, the more time we will spend. And the more time we spend, the more we will regret it afterwards.

If that doesn’t seem like an addictive cycle, I’m not sure what does.


I’ll Take Reality with a Side of Augmentation, Please….

We don’t want to replace reality. We just want to nudge it a little.

At least, that seems to be the upshot of a new survey from the international law firm Perkins Coie. The firm asked start-up founders, tech execs, investors and consultants about their predictions for both augmented reality (AR) and virtual reality (VR). While VR had a head start, the majority of those surveyed (67%) felt that AR would overtake VR in revenue within the next three years.

The reasons they gave were mainly focused on roadblocks in the technology itself: VR headsets were too bulky, the user experience was not smooth enough due to technical limitations, the cost of adopting VR was higher than that of AR, and there was not enough content available in the VR universe.

I think there’s another reason. We actually like reality. We’re not looking to isolate ourselves from reality. We’re looking to enhance it.

Granted, if we are talking about adoption rates, there seem to be a lot more potential applications for augmented reality. Everything you do could stand a little augmentation. For example, you could probably do your job better if your own abilities were augmented with real-time information. Pilots would be better at flying. Teachers would be better at teaching. Surgeons would be better at performing surgery. Mechanics would be better at fixing things.

You could also enjoy things more with a little augmentation. Looking for a restaurant would be easier. Taking a tour would be more informative. Attending a play or watching a movie could be candidates for a little augmented content. AR could even make your layover at an airport less interminable.

I think of VR as a novelty. The sheer nerdiness of it makes it a technology of limited appeal. As one developer quoted in the study says, “Not everyone is a gadget freak. The industry needs to appeal to those who aren’t.” AR has a clearly understood user benefit. We can all grasp a scenario where augmentation could make our lives better in some way. But it’s hard to understand how VR would have a real impact on our day to day lives. Its appeal seems to be constrained to entertainment, and even then, it’s entertainment aimed at a limited market.

The AR wave is advancing in some interesting directions. Google Glass has retreated from the consumer market and is currently concentrating on business and industrial applications. The premise of Glass is to allow you to work smarter, access instant expertise and stay hands-on. Bose is betting on a subset of AR, which it dubs Aural Augmentation: it believes sound is the best way to add content to our lives. And even Amazon has borrowed an idea from IKEA and stepped into the AR ring with Amazon AR View, which lets you place items you’re considering buying in your home to see whether they fit before you buy.

One big player that is still betting heavily on VR is Facebook, with its Oculus headset. This is not surprising, given that Mark Zuckerberg is the quintessential geek and seems intent on manufacturing our social reality for us. In a demonstration a year ago, Zuckerberg struck all kinds of tone-deaf clunkers when he and Facebook social VR chief Rachel Franklin took on cartoon personas for a VR tour of devastated Puerto Rico. The juxtaposition could only be described as weird: a scene of human misery that was all too real, visited by a cartoon Zuckerberg. At one point, he enthused, “It feels like we’re really here in Puerto Rico.”

You weren’t, Mark. You were safely at Facebook headquarters in Menlo Park, California, wearing a headset that made you look like a dork. That was the reality.

Bose Planning to Add a Soundtrack to Our World

Bose is placing a big bet on AR…

Or more correctly: AAR.

When we think of AR (Augmented Reality) we tend to think of digital data superimposed on our field of vision. But Bose is sticking to their wheelhouse and bringing audio to our augmented world – hence AAR – Audio Augmented Reality.

For me – who started my career as a radio copywriter and producer – it’s an intriguing idea. And it just might be a perfect match for how our senses parse the world around us.

Sound tends to be underappreciated when we think about how we experience the world. But it packs a hell of an emotional wallop. Theme park designers have known this for years. They call it underscoring. That’s the music you hear when you walk down Main Street USA in Disneyland (which could be the Desecration Rag by Felix Arndt), or visit the Wizarding World of Harry Potter at Universal (perhaps Hedwig’s Theme by John Williams). You might not even be aware of it. But it bubbles just below the level of consciousness, wiring itself directly to your emotional hot buttons. Theme parks would be much less appealing without a soundtrack. The same is true for the world in general.

Cognitively, we process sound entirely differently than we process sights. Our eyes are our primary sensory portal, and because of this, vision tends to dominate our attentional focus. The brain has limited bandwidth for processing conflicting visual stimuli, so if we layer additional information over our view of the world, as most AR does, we force the brain to make a context switch. Even with a heads-up display, the brain has to switch between the two. We can’t concentrate on both at the same time.

But our brains can handle the job of combining sight and sound very nicely. It’s what we evolved to do. We automatically synthesize the two. Unlike overlaid visual information, which must borrow attention from something else, sight plus sound is not a zero-sum game.

Bose made their announcement at SXSW, but I first became aware of the plan just last week. And I became aware because Bose had bought out Detour, a start-up based in San Francisco that produced audio-immersive walking tours. I was using the Detour platform to create audio tours that could be done by bike. At the end of February, I received an email abruptly announcing that access to the Detour platform would end the very next day. I’ve been around the high-tech biz long enough to know that there was more to this than a simple discontinuation of the platform. There was another shoe yet to drop.

Last week, it dropped. The reason for the abrupt end was that Detour had been purchased by Bose.

Although Detour never gained the traction that I’m sure founder Andrew Mason (who also founded Groupon) hoped for, the tours were exceptionally well produced. I had the opportunity to take several of them while in San Francisco. It was my first real experience with augmented audio reality. I felt like I was walking through a documentary. At no time did I feel my attention was torn. For the most part, my phone stayed in my pocket. It was damned near seamless.

Regular readers of mine will know that I’m more than a little apprehensive about the whole area of Virtual and Augmented Reality. But I have to admit, Bose’s approach sounds pretty good so far.


Attention: Divided

I’d like you to give me your undivided attention. I’d like you to – but you can’t. First, I’m probably not interesting enough. Second, you no longer live in a world where that’s possible. And third, even if you could, I’m not sure I could handle it. I’m out of practice.

The fact is, our attention is almost never undivided anymore. Let’s take talking, for example. You know: old-fashioned, face-to-face, sharing-the-same-physical-space communication. It’s the one channel that most demands undivided attention. But when is the last time you had a conversation where you were giving it 100 percent of your attention? I actually had one this past week, and I have to tell you, it unnerved me. I was meeting with a museum curator, and she immediately locked eyes on me and gave me the full breadth of her attention. I faltered. I couldn’t hold her gaze. As I talked, I scanned the room we were in. It’s probably been years since someone did that to me. And nary a smartphone was in sight.

If this is true when we’re physically present, imagine the challenge in other channels. Take television, for instance. We don’t watch TV like we used to. When I was growing up, I would be verging on catatonia as I watched the sparks fly between Batman and Catwoman (the Julie Newmar version, with all due respect to Eartha Kitt and Lee Meriwether). My dad used to call it the “idiot box.” At the time, I thought it was a comment on the quality of programming, but I now realize he was referring to my mental state. You could have dropped a live badger in my lap and not an eye would have been batted.

But that’s definitely not how we watch TV now. A recent study indicates that 177 million Americans have at least one other screen going – usually a smartphone – while they watch TV. According to Nielsen, there are only 120 million TV households. That means that 1.48 adults per household are definitely dividing their attention amongst at least two devices while watching Game of Thrones. My daughters and wife are squarely in that camp. Ironically, I now get frustrated because they don’t watch TV the same way I do – catatonically.

Now, I’m sure watching TV does not represent the pinnacle of focused mindfulness. But this could be a canary in a coal mine. We simply don’t allocate undivided attention to anything anymore. We think we’re multitasking, but that’s a myth. We don’t multitask – we mentally fidget. We have the average attention span of a gnat.

So, what is the price we’re paying for living in this attention-deficit world? Well, first, there’s a price to be paid when we do decide to communicate. I’ve already stated how unnerving it was for me when I did have someone’s laser-focused attention. But the opposite is also true. It’s tough to communicate with someone who is obviously paying little attention to you. Try presenting to a group that is more interested in chatting with each other. Research studies show that our ability to communicate effectively erodes quickly when we’re not getting feedback that the person or people we’re talking to are actually paying attention to us. Effective communication requires an adequate allocation of attention on both ends; otherwise it spins into a downward spiral.

But it’s not just communication that suffers. It’s our ability to focus on anything. It’s just too damned tempting to pick up our smartphone and check it. We’re paying a price for our mythical multitasking – Boise State professor Nancy Napier suggests a simple test to prove this. Draw two lines on a piece of paper. While having someone time you, write “I am a great multi-tasker” on one, then write down the numbers from 1 to 20 on the other. Next, repeat this same exercise, but this time, alternate between the two: write “I” on the first line, then “1” on the second, then go back and write “a” on the first, “2” on the second and so on. What’s your time? It will probably be double what it was the first time.

Every time we try to mentally juggle, we’re more likely to drop a ball. Attention is important. But we keep allocating thinner and thinner slices of it. And a big part of the reason is the smartphone that is probably within arm’s reach of you right now. Why? Because of something called intermittent variable rewards. Slot machines use it. And that’s probably why slot machines make more money in the US than baseball, movies and theme parks combined. Tristan Harris, who is taking technology to task for hijacking our brains, explains the concept: “If you want to maximize addictiveness, all tech designers need to do is link a user’s action (like pulling a lever) with a variable reward. You pull a lever and immediately receive either an enticing reward (a match, a prize!) or nothing. Addictiveness is maximized when the rate of reward is most variable.”

Your smartphone is no different. In this case, the reward is a new email, Facebook post, Instagram photo or Tinder match. Intermittent variable rewards – together with the fear of missing out – make your smartphone as addictive as a slot machine.
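To see why the variability itself does the hooking, here is a toy illustration of a variable reward schedule in Python (my own sketch, not Harris’s; the 10% reward rate is an arbitrary assumption). The average payoff is fixed, but any single pull, or check of the phone, might be “the one.”

```python
# A toy variable-reward schedule: rewards land unpredictably even though the
# average rate is constant. All numbers are assumptions for illustration only.
import random

def pulls_until_reward(p_reward=0.1):
    """Keep pulling the lever (checking the phone) until a reward lands."""
    pulls = 0
    while True:
        pulls += 1
        if random.random() < p_reward:  # unpredictable on any single pull
            return pulls

trials = [pulls_until_reward() for _ in range(10_000)]
print(f"Average pulls per reward: {sum(trials) / len(trials):.1f}")
print(f"Longest dry streak: {max(trials)} pulls")
```

On average a reward shows up every 10 pulls, but the dry streaks can run many times longer than that, and it’s the not knowing which pull pays off that keeps us coming back.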

I’m sorry, but I’m no match for all of that.

Damn You Technology…

Quit batting your seductive visual sensors at me. You know I can’t resist. But I often wonder what I’m giving up when I give in to your temptations. That’s why I was interested in reading Tom Goodwin’s take on the major theme at SXSW – the Battle for Humanity. He broke this down into three sub-themes. I agree with them. In fact, I’ve written on all of them in the past. They were:

Data Trading – We’re creating a market for data. But when you’re the one who generated that data, who should own it?

Shift to No Screens – An increasing number of connected devices will change our concept of what it means to be online.

Content Tunnel Vision – As the content we see is increasingly filtered based on our preferences, what does that do to our perception of what is real?

But while we’re talking about our imminent surrender to the machines, I feel there are some other themes that also merit some discussion. Let’s limit it to two today.

A New Definition of Connection and Community

Robert Sapolsky

A few weeks ago, I read a fascinating article by neuroendocrinologist and author Robert Sapolsky. In it, he posits that understanding Capgras syndrome is the key to understanding the Facebook society. Capgras, first identified by French psychiatrist Joseph Capgras, is a disorder in which we can recognize a person’s face but can’t retrieve the feelings of familiarity that go with it. Those afflicted can identify the face of a loved one but swear that it’s actually an identical imposter. Recognition of a person and retrieval of the emotions attached to that person are handled by two different parts of the brain. When the connection is broken, Capgras syndrome is the result.

This bifurcation of how we identify people is interesting. There is the yin and yang of cognition and emotion. The fusiform gyrus cognitively “parses” the face and then the brain retrieves the emotions and memories that are associated with it. To a normally functioning brain, it seems seamless and connected, but because two different regions (or, in the case of emotion, a network of regions) are involved, they can neurologically evolve independently of each other. And in the age of Facebook, that could mean a significant shift in the way we recognize connections and create “cognitive communities.” Sapolsky elaborates:

Through history, Capgras syndrome has been a cultural mirror of a dissociative mind, where thoughts of recognition and feelings of intimacy have been sundered. It is still that mirror. Today we think that what is false and artificial in the world around us is substantive and meaningful. It’s not that loved ones and friends are mistaken for simulations, but that simulations are mistaken for them.

As I said in a column a few months back, we are substituting surface cues for familiarity. We are rushing into intimacy without all the messy, time-consuming process of understanding and shared experience that generally accompanies it.

Brains do love to take short cuts. They’re not big on heavy lifting. Here’s another example of that…

Free Will is Replaced with An Algorithm

Yuval Harari

In a conversation with historian Yuval Harari, author of the bestseller “Sapiens,” Derek Thompson of The Atlantic explored “The Post Human World.” One of the topics they discussed was the End of Individualism.

Humans (or, at least, most humans) have believed our decisions come from a mystical soul – a transcendental something that lives above our base biology and is in control of our will. Wrapped up in this is the concept of ourselves as individuals and our importance in the world as free-thinking agents.

In the past few decades, there has been a growing realization that our notion of “free will” is just the result of a cascade of biochemical processes. There is nothing magical here; there is just a chain of synaptic switches being thrown. And that being the case, if a computer can process things faster than our brains, should we simply relegate our thinking to a machine?

In many ways, this is already happening. We trust Google Maps or our GPS device more than we trust our ability to find our own way. We trust Google Search more than our own memory. We’re on the verge of trusting our wearable fitness tracking devices more than our own body’s feedback. And in all these cases, our trust in tech is justified. These things are right more often than we are. But when it comes to humans vs. machines, they represent a slippery slope that we’re already well down. Harari speculates about what might be at the bottom:

What really happens is that the self disintegrates. It’s not that you understand your true self better, but you come to realize there is no true self. There is just a complicated connection of biochemical connections, without a core. There is no authentic voice that lives inside you.

When I lie awake worrying about technology, these are the types of things that I think about. The big question is: is humanity an outmoded model? The fact is that we evolved to be successful in a certain environment. But here’s the irony: we were so successful that we changed that environment into one where the tools we’ve created, not the creators, are the most successful adaptation. We may have made ourselves obsolete. And that’s why really smart humans, like Bill Gates, Elon Musk and Stephen Hawking, are so worried about artificial intelligence.

“It would take off on its own, and re-design itself at an ever increasing rate,” said Hawking in a recent interview with BBC. “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Worried about a machine taking your job? That may be the least of your worries.


Why Millennials are so Fascinating

When I was growing up, there was a lot of talk about the Generation Gap. This referred to the ideological gap between my generation, the Baby Boomers, and our parents’ generation, the Silent Generation (1923–1944).

But in terms of behavior, there was a significant gap even between early Baby Boomers and those who came at the tail end of the boom, like myself. Generations are products of their environment, and there was a significant change in our environment over the 20-year run of the Baby Boomers, from 1945 to 1964. During that time, TV came into most of our homes. Later boomers, like myself, were raised with TV. And I believe the adoption of that one technology created an unbridgeable ideological gap that is still impacting our society.

The adoption of ubiquitous technologies – like TV and, more recently, connective platforms like mobile phones and the Internet – inevitably triggers massive environmental shifts. This is especially true for generations that grow up with the technology. Our brain goes through two phases where it literally rewires itself to adapt to its environment. One of those phases happens from birth to about 2 to 3 years of age, and the other happens during puberty, from 14 to 20 years of age. A generation that goes through both of those phases while exposed to a new technology will inevitably be quite different from the generation that preceded it.

The two phases of our brain’s restructuring – also called neuroplasticity – are quite different in their goals. The first period – right after birth – rewires the brain to adapt to its physical environment. We learn to adapt to external stimuli and to interact with our surroundings. The second phase is perhaps even more influential in terms of who we will eventually be. This is when our brain creates its social connections. It’s also when we set our ideological compasses. Technologies we spend a huge amount of time with will inevitably impact both those processes.

That’s what makes Millennials so fascinating. It’s probably the first generation since my own that bridges the adoption of a massively influential technological change. Most definitions of this generation have it starting in the early ’80s and extending to 1996 or ’97. This means the early Millennials grew up in an environment that was not all that different from that of the generation that preceded them. The technologies undergoing massive adoption in the early ’80s were VCRs and microwaves – hardly earth-shaking in terms of environmental change. But late Millennials, like my daughters, grew up during the rapid adoption of three massively disruptive technologies: mobile phones, computers and the Internet. So we have a completely different environment to which the brain must adapt, not only from generation to generation, but within the generation itself. This makes Millennials a very complex generation to pin down.

In terms of trying to understand this, let’s go back to my generation, the Baby Boomers, to see how environmental adaptation can alter the face of society. Boomers who grew up in the late ’40s and early ’50s were much different from boomers who grew up just a few years later. Early boomers probably didn’t have a TV; only the wealthiest families would have been able to afford one. In 1951, only 24% of American homes had a TV. But by 1960, almost 90% did.

Whether we like to admit it or not, the values of my generation were shaped by TV. But this was not a universal process. The impact of TV was dependent on household income, which would have been correlated with education. So TV impacted the societal elite first and then trickled down. This elite segment would have also been the one most likely to attend college. So, in the mid-’60s, you had a segment of a generation whose values and worldview were at least partially shaped by TV – and its creation of a “global village” – and who suddenly came together at a time and place (college) where we build the persona foundations we will inhabit for the rest of our lives. You had another segment of a generation that didn’t have this same exposure and didn’t pursue a post-secondary education. The Vietnam War didn’t create the counter-cultural revolution. It just gave it a handy focal point that highlighted the ideological rift not only between two generations but also within the Baby Boomers themselves. At that point in history, part of our society turned right and part turned left.

Is the same thing happening with Millennials now? Certainly the worldview of at least the younger Millennials has been shaped through exposure to connected media. When polled, they inevitably have dramatically different opinions about things like religion, politics, science – well – pretty much everything. But even within the Millennial camp, their views often seem incoherent and confusing. Perhaps another intra-generational divide is forming. The fact is it’s probably too early to tell. These things take time to play out. But if it plays out like it did last time this happened, the impact will still be felt a half century from now.