To Be There – Or Not To Be There

According to Eventbrite, hybrid events are the hottest thing for 2021. So I started thinking, what would that possibly look like, as a planner or a participant?

The interesting thing about hybrid events is that they force us to really think about how we experience things. What process do we go through when we let the outside world in? What do we lose if we do that virtually? What do we gain, if anything? And, more importantly, how do we connect with other people during those experiences?

These are questions we didn’t think much about even a year ago. But today, in a reality that’s trying to straddle both the physical and virtual worlds, they are highly relevant to how we’ll live our lives in the future.

The Italian Cooking Lesson

First, let’s try a little thought experiment.

In our town, the local Italian Club — in which both my wife and I are involved — offered cooking lessons before we were all locked down. Groups of eight to 12 people would get together with an exuberant Italian chef in a large commercial kitchen, and together they would make an authentic dish like gnocchi or ravioli. There was a little vino, a little Italian culture and a lot of laughter. These classes were a tremendous hit.

That all ended last March. But we hope to start thinking about offering them again in late 2021 or 2022. And, if we do, would it make sense to offer them as a “hybrid” event, where you can participate in person or pick up a box of preselected ingredients and follow along in your own kitchen?

As an event organizer, this would be tempting. You could still charge the full price for physical attendance, where you’re restricted to 12 people, but you could create an additional revenue stream by introducing a virtual option open to as many people as wanted to join. Even at a lower registration fee, it would still dramatically increase revenue at a relatively small incremental cost. It would be “molto” profitable.

But now consider this as an attendee. Would you sign up for a virtual event like that? If you had no other option to experience it, maybe. But what if you could actually be there in person? Then what? Would you feel relegated to a second-class experience by being isolated in your own kitchen, without many of the sensory benefits that go along with the physical experience?

The Psychology of Zoom Fatigue

When I thought about our cooking lesson example, I was feeling less than enthused. And I wondered why.

It turns out that there’s some actual brain science behind my digital ennui. In an article in the Psychiatric Times, Jena Lee, MD, takes us on a “Neuropsychological Exploration of Zoom Fatigue.”

A decade ago, I was writing a lot about how we balance risk and reward. I believe that a lot of our behaviors can be explained by how we calculate the dynamic tension between those two things. It turns out that it may also be at the root of how we feel about virtual events. Dr. Lee explains,

“A core psychological component of fatigue is a rewards-costs trade-off that happens in our minds unconsciously. Basically, at every level of behavior, a trade-off is made between the likely rewards versus costs of engaging in a certain activity.”

Let’s take our Italian cooking class again. Let’s imagine we’re there in person. For our brain, this would hit all the right “reward” buttons that come with being physically “in the moment.” Subconsciously, our brains would reward us by releasing oxytocin and dopamine along with other “pleasure” neurochemicals that would make the experience highly enjoyable for us. The cost/reward calculation would be heavily weighted toward “reward.”

But that’s not the case with the virtual event. Yes, it might still be considered “rewarding,” but at an entirely different — and lesser — scale than the same experience in person. On top of that, we would have the added costs of figuring out the technology required, logging into the lesson and trying to follow along. Our risk/reward calculator just might decide the trade-offs weren’t worth it.

Without my even knowing it, this was the calculation going on in my head that left me less than enthused.

But there is a flip side to this.

Reducing the Risk Virtually

Last fall, a new study from Oracle in the U.K. was published with the headline, “82% of People Believe Robots Can Support Their Mental Health Better than Humans.”

Something about that just didn’t seem right to me. How could this be? Again, we had the choice between virtual and physical connection, and this time the odds were overwhelmingly in favor of the virtual option.

But when I thought about it in terms of risk and reward, it suddenly made sense. Talking about our own mental health is a high-risk activity. It’s sad to say, but opening up to your manager about job-related stress could get you a sympathetic ear, or it could get you fired. We are taking baby steps towards destigmatizing mental health issues, but we’re at the beginning of a very long journey.

In this case, the risk/reward calculation is flipped completely around. Virtual connections, which rely on limited bandwidth — and therefore limited vulnerability on our part — seem like a much lower risk alternative than pouring our hearts out in person. This is especially true if we can remain anonymous.

It’s All About Human Hardware

The idea of virtual/physical hybrids with expanded revenue streams will be very attractive to marketers and event organizers. There will be many jumping on this bandwagon. But, like all the new opportunities that technology brings us, it has to interface with a system that has been around for hundreds of thousands of years — otherwise known as our brain.

The Importance of Playing Make-Believe

One of my favourite sounds in the world is children playing. Although our children are well past that age, we have stayed in a neighbourhood where new families move in all the time. One of the things that has always amazed me is a child’s ability to make believe. I used to do this but I don’t any more. At least, I don’t do it the same way I used to.

Just take a minute to think about the term itself: make-believe. The very words connote the creation of an imaginary world that you and your playmates can share, even in that brief and fleeting moment. Out of the ether, you can create an ephemeral reality where you can play God. A few adults can still do that. George R.R. Martin pulled it off. J.K. Rowling did likewise. But for most of us, our days of make-believe are well behind us.

I worry about the state of play. I am concerned that rather than making believe themselves, children today are playing in the manufactured and highly commercialized imaginations of profit-hungry corporations. There is no making — there is only consuming. And that could have some serious consequences.

Although we don’t use imagination the way we once did, it is the foundation for the most important cognitive tasks we do. It was Albert Einstein who said, “Imagination is more important than knowledge. For knowledge is limited, whereas imagination embraces the entire world, stimulating progress, giving birth to evolution.”

It is imagination that connects the dots, explores the “what-ifs” and peeks beyond the bounds of the known. It is what separates us from machines.

In that, Einstein presciently nailed the importance of imagination. Only here does the mysterious alchemy of the human mind somehow magically weave fully formed worlds out of nothingness and snippets of reality. We may not play princess anymore, but our ability to imagine underpins everything of substance that we think about.

The importance of playing make-believe goes beyond cognition. Imagination is also essential to our ability to empathize. We need it to put ourselves in the place of others. Our “theory of mind” is just another instance of the many facets of imagination.

This thing we take for granted has been linked to a massive range of essential cognitive developments. In addition to the above examples, pretending gives children a safe place to begin to define their own place in society. It helps them explore interpersonal relationships. It creates the framework for them to assimilate information from the world into their own representation of reality.

We are not the only animals that play when we’re young. It’s true for many mammals, and scientists have discovered it’s also essential in species as diverse as crocodiles, turtles, octopuses and even wasps.

For other species, though, it seems play is mainly intended to help them come to terms with surviving in the physical world. We’re alone in our need for elaborate play involving imagination and cognitive games.

With typical human hubris, we adults have been on a century-long mission to structure the act of play. In doing so, we have been imposing our own rules, frameworks and expectations on something we should be keeping as is. Much of the value of play comes from its very lack of structure. Playing isn’t as effective when it’s done under adult supervision. Kids have to be kids.

Play definitely loses much of its value when it becomes passive consumption of content imagined and presented by others through digital entertainment channels. Childhood is meant to give us a blank canvas to colour with our imagination.

As we grow, the real world encroaches on this canvas.  But the delivery of child-targeted content through technology is also shrinking the boundaries of our own imagination.

Still, despite corporate interests that run counter to playing in its purest sense, I suspect that children may be more resilient than I fear. After all, I can still hear the children playing next door. And their imaginations still awe and inspire me.

Less Tech = Fewer Regrets

In a tech-ubiquitous world, I fear our reality is becoming more “tech” and less “world.” But how do you fight that? Well, if you’re Kendall Marianacci – a recent college grad – you ditch your phone and move to Nepal. In that process she learned that “paying attention to the life in front of you opens a new world.”

In a recent post, she reflected on lessons learned by truly getting off the grid:

“Not having any distractions of a phone and being immersed in this different world, I had to pay more attention to my surroundings. I took walks every day just to explore. I went out of my way to meet new people and ask them questions about their lives. When this became the norm, I realized I was living for one of the first times of my life. I was not in my own head distracted by where I was going and what I needed to do. I was just being. I was present and welcoming to the moment. I was compassionate and throwing myself into life with whoever was around me.”

It’s sad and a little shocking that we have to go to such extremes to realize how much of our world can be obscured by a little 5-inch screen. Where did tech that was supposed to make our lives better go off the rails? And was the derailment intentional?

“Absolutely,” says Jesse Weaver, a product designer. In a post on Medium.com, he lays out – in alarming terms – our tech dependency and the trade-off we’re agreeing to:

“The digital world, as we’ve designed it, is draining us. The products and services we use are like needy friends: desperate and demanding. Yet we can’t step away. We’re in a codependent relationship. Our products never seem to have enough, and we’re always willing to give a little more. They need our data, files, photos, posts, friends, cars, and houses. They need every second of our attention.

We’re willing to give these things to our digital products because the products themselves are so useful. Product designers are experts at delivering utility.”

But are they? Yes, there is utility here, but it’s wrapped in a thick layer of addiction. What product designers are really good at is fostering addiction by dangling a carrot of utility. And, as Weaver points out, we often mistake utility for empowerment,

“Empowerment means becoming more confident, especially in controlling our own lives and asserting our rights. That is not technology’s current paradigm. Quite often, our interactions with these useful products leave us feeling depressed, diminished, and frustrated.”

That’s not just Weaver’s opinion. A new study from HumaneTech.com backs it up with empirical evidence. They partnered with Moment, a screen time tracking app, “to ask how much screen time in apps left people feeling happy, and how much time left them in regret.”

According to 200,000 iPhone users, here are the apps that make people happiest:

  1. Calm
  2. Google Calendar
  3. Headspace
  4. Insight Timer
  5. The Weather
  6. MyFitnessPal
  7. Audible
  8. Waze
  9. Amazon Music
  10. Podcasts

That’s three meditative apps, three utilitarian apps, one fitness app, one entertainment app and two apps that help you broaden your intellectual horizons. If you are talking human empowerment – according to Weaver’s definition – you could do a lot worse than this roundup.

But here were the apps that left their users with a feeling of regret:

  1. Grindr
  2. Candy Crush Saga
  3. Facebook
  4. WeChat
  5. Candy Crush
  6. Reddit
  7. Tweetbot
  8. Weibo
  9. Tinder
  10. Subway Surf

Even more interesting is the average time spent in these apps. For the first group, average daily usage was 9 minutes. For the regret group, it was 57 minutes! We feel better about apps that do their job, add something to our lives and then let us get on with living that life. What we hate are time sucks that may offer a kernel of functionality wrapped in an interface that ensnares us like a digital spider web.

This study comes from the Center for Humane Technology, headed by ex-Googler Tristan Harris. The goal of the Center is to encourage designers and developers to create apps that move “away from technology that extracts attention and erodes society, towards technology that protects our minds and replenishes society.”

That all sounds great, but what does it really mean for you and me and everybody else who hasn’t moved to Nepal? It all depends on what revenue model is driving the development of these apps and platforms. If it’s anything that depends on advertising – in any form – don’t count on any nobly intentioned shifts in design direction anytime soon. More likely, it will mean some half-hearted placations like Apple’s new Screen Time warning that pops up on your phone every Sunday, giving you the illusion of control over your behaviour.

Why an illusion? Because things like Apple’s Screen Time are great for our prefrontal cortex, the intent-driven part of our rational brain that puts our best intentions forward. They’re not so good for our Lizard brain, which subconsciously drives us to play Candy Crush and swipe our way through Tinder. And when it comes to addiction, the Lizard brain has been on a winning streak for most of the history of mankind. I don’t like our odds.

The developers’ escape hatch is always the same – they’re giving us control. It’s our own choice, and freedom of choice is always a good thing. But there is an unstated deception here. It’s the same lie that Mark Zuckerberg told last Wednesday when he laid out the privacy-focused future of Facebook. He’s putting us in control. But he’s not. What he’s doing is making us feel better about spending more time on Facebook. And that’s exactly the problem. The less we worry about the time we spend on Facebook, the less we will think about it at all. The less we think about it, the more time we will spend. And the more time we spend, the more we will regret it afterwards.

If that doesn’t seem like an addictive cycle, I’m not sure what does.


I’ll Take Reality with a Side of Augmentation, Please…

We don’t want to replace reality. We just want to nudge it a little.

At least, that seems to be the upshot of a new survey from the international law firm Perkins Coie. The firm asked start-up founders, tech execs, investors and consultants about their predictions for both Augmented (AR) and Virtual (VR) Reality. While Virtual Reality had a head start, the majority of those surveyed (67%) felt that AR would overtake VR in revenue within the next 3 years.

The reasons they gave were mainly focused on roadblocks in the technology itself: VR headsets were too bulky, the user experience was not smooth enough due to technical limitations, the cost of adopting VR was higher than AR and there was not enough content available in the VR universe.

I think there’s another reason. We actually like reality. We’re not looking to isolate ourselves from reality. We’re looking to enhance it.

Granted, if we are talking about adoption rates, there seem to be a lot more potential applications for Augmented Reality. Everything you do could stand a little augmentation. For example, you could probably do your job better if your own abilities were augmented with real-time information. Pilots would be better at flying. Teachers would be better at teaching. Surgeons would be better at performing surgery. Mechanics would be better at fixing things.

You could also enjoy things more with a little augmentation. Looking for a restaurant would be easier. Taking a tour would be more informative. Attending a play or watching a movie could be candidates for a little augmented content. AR could even make your layover at an airport less interminable.

I think of VR as a novelty. The sheer nerdiness of it makes it a technology of limited appeal. As one developer quoted in the study says, “Not everyone is a gadget freak. The industry needs to appeal to those who aren’t.” AR has a clearly understood user benefit. We can all grasp a scenario where augmentation could make our lives better in some way. But it’s hard to understand how VR would have a real impact on our day to day lives. Its appeal seems to be constrained to entertainment, and even then, it’s entertainment aimed at a limited market.

The AR wave is advancing in some interesting directions. Google Glass has retreated from the consumer market and is currently concentrating on business and industrial applications. The premise of Glass is to allow you to work smarter, access instant expertise and stay hands-on. Bose is betting on a subset of AR, which it dubs Aural Augmentation. It believes sound is the best way to add content to our lives. And even Amazon has borrowed an idea from IKEA and stepped into the AR ring with Amazon AR View, where you can place items you’re considering buying in your home to see if they are a fit before you buy.

One big player that is still betting heavily on VR is Facebook, with its Oculus headset. This is not surprising, given that Mark Zuckerberg is the quintessential geek and seems intent on manufacturing our social reality for us. In a demonstration a year ago, Zuckerberg struck all kinds of tone-deaf clunkers when he and Facebook social VR chief Rachel Franklin took on cartoon personas for a VR tour of devastated Puerto Rico. The juxtaposition could only be described as weird: a scene of human misery that was all too real, visited by a cartoon Zuckerberg. At one point, he enthused, “It feels like we’re really here in Puerto Rico.”

You weren’t, Mark. You were safely in Facebook headquarters in Menlo Park, California – wearing a headset that made you look like a dork. That was the reality.

Bose Planning to Add a Soundtrack to Our World

Bose is placing a big bet on AR…

Or more correctly: AAR.

When we think of AR (Augmented Reality) we tend to think of digital data superimposed on our field of vision. But Bose is sticking to their wheelhouse and bringing audio to our augmented world – hence AAR – Audio Augmented Reality.

For me – who started my career as a radio copywriter and producer – it’s an intriguing idea. And it just might be a perfect match for how our senses parse the world around us.

Sound tends to be underappreciated when we think about how we experience the world. But it packs a hell of an emotional wallop. Theme park designers have known this for years. They call it underscoring. That’s the music that you hear when you walk down Main Street USA in Disneyland (which could be the Desecration Rag by Felix Arndt), or visit the Wizarding World of Harry Potter at Universal (perhaps Hedwig’s Theme by John Williams). You might not even be aware of it. But it bubbles just below the level of consciousness, wiring itself directly to your emotional hot buttons. Theme parks would be much less appealing without a soundtrack. The same is true for the world in general.

Cognitively, we process sound entirely differently than we process sights. Our primary sensory portal is our eyes, and because of this, vision tends to dominate our attentional focus. This means the brain has limited bandwidth to process conflicting visual stimuli. If we layer additional information over our view of the world, as most AR does, we force the brain to make a context switch. Even with a heads-up display, the brain has to switch between the two. We can’t concentrate on both at the same time.

But our brains can handle the job of combining sight and sound very nicely. It’s what we evolved to do. We automatically synthesize the two. Unlike layered visual information, which must borrow attention from something else, sight and sound are not a zero-sum game.

Bose made their announcement at SXSW, but I first became aware of the plan just last week. And I became aware because Bose had bought Detour, a start-up based in San Francisco that produced audio-immersive walking tours. I was using the Detour platform to create audio tours that could be done by bike. At the end of February, I received an email abruptly announcing that access to the Detour platform would end the very next day. I’ve been around the high-tech biz long enough to know that there was more to this than a simple discontinuation of the platform. There was another shoe yet to drop.

Last week, it dropped. The reason for the abrupt end was that Detour had been purchased by Bose.

Although Detour never gained the traction that I’m sure founder Andrew Mason (who also founded Groupon) hoped for, the tours were exceptionally well produced. I had the opportunity to take several of them while in San Francisco. It was my first real experience with audio augmented reality. I felt like I was walking through a documentary. At no time did I feel my attention was torn. For the most part, my phone stayed in my pocket. It was damned near seamless.

Regular readers of mine will know that I’m more than a little apprehensive about the whole area of Virtual and Augmented Reality. But I have to admit, Bose’s approach sounds pretty good so far.


Attention: Divided

I’d like you to give me your undivided attention. I’d like you to – but you can’t. First, I’m probably not interesting enough. Second, you no longer live in a world where that’s possible. And third, even if you could, I’m not sure I could handle it. I’m out of practice.

The fact is, our attention is almost never undivided anymore. Let’s take talking, for example. You know: old-fashioned, face-to-face, sharing-the-same-physical-space communication. It’s the one channel that most demands undivided attention. But when is the last time you had a conversation where you gave it 100 percent of your attention? I actually had one this past week, and I have to tell you, it unnerved me. I was meeting with a museum curator, and she immediately locked eyes on me and gave me the full breadth of her attention. I faltered. I couldn’t hold her gaze. As I talked, I scanned the room we were in. It’s probably been years since someone did that to me. And nary a smartphone was in sight.

If this is true when we’re physically present, imagine the challenge in other channels. Take television, for instance. We don’t watch TV like we used to. When I was growing up, I would be verging on catatonia as I watched the sparks fly between Batman and Catwoman (the Julie Newmar version – with all due respect to Eartha Kitt and Lee Meriwether). My dad used to call it the “idiot box.” At the time, I thought it was a comment on the quality of programming, but I now realize he was referring to my mental state. You could have dropped a live badger in my lap and not an eye would have been batted.

But that’s definitely not how we watch TV now. A recent study indicates that 177 million Americans have at least one other screen going – usually a smartphone – while they watch TV. According to Nielsen, there are only 120 million TV households. That means that 1.48 adults per household are definitely dividing their attention amongst at least two devices while watching Game of Thrones. My daughters and wife are squarely in that camp. Ironically, I now get frustrated because they don’t watch TV the same way I do – catatonically.

Now, I’m sure watching TV does not represent the pinnacle of focused mindfulness. But this could be a canary in a coalmine. We simply don’t allocate undivided attention to anything anymore. We think we’re multi-tasking, but that’s a myth. We don’t multi-task – we mentally fidget. We have the average attention span of a gnat.

So, what is the price we’re paying for living in this attention-deficit world? Well, first, there’s a price to be paid when we do decide to communicate. I’ve already stated how unnerving it was for me when I did have someone’s laser-focused attention. But the opposite is also true. It’s tough to communicate with someone who is obviously paying little attention to you. Try presenting to a group that is more interested in chatting with each other. Research shows that our ability to communicate effectively erodes quickly when we’re not getting feedback that the person or people we’re talking to are actually paying attention to us. Effective communication requires an adequate allocation of attention on both ends; otherwise, it spins into a downward spiral.

But it’s not just communication that suffers. It’s our ability to focus on anything. It’s just too damned tempting to pick up our smartphone and check it. We’re paying a price for our mythical multitasking. Boise State professor Nancy Napier suggests a simple test to prove this. Draw two lines on a piece of paper. While having someone time you, write “I am a great multi-tasker” on one, then write down the numbers from 1 to 20 on the other. Next, repeat the same exercise, but this time alternate between the two: write “I” on the first line, then “1” on the second, then go back and write “a” on the first, “2” on the second, and so on. What’s your time? It will probably be double what it was the first time.
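
If you’d rather run Napier’s drill at a keyboard than on paper, here’s a minimal sketch in Python. The prompts and structure are my own adaptation, not Napier’s protocol; the point is simply that the interleaved round forces a task switch on every item, and the clock shows what that costs.

```python
# A rough, self-timed version of Napier's two-line exercise.
# You type each item when prompted; the script just does the stopwatch work.
import time

LETTERS = [c for c in "I am a great multi-tasker" if c != " "]
NUMBERS = [str(n) for n in range(1, 21)]

def timed(prompts):
    """Show each prompt, wait for the user to type it, return elapsed seconds."""
    start = time.perf_counter()
    for p in prompts:
        input(f"Type '{p}' and press Enter: ")
    return time.perf_counter() - start

# Round 1: finish one line, then the other (no task switching).
sequential = timed(LETTERS + NUMBERS)

# Round 2: alternate letter, number, letter, number... (constant switching).
interleaved = [item for pair in zip(LETTERS, NUMBERS) for item in pair]
switching = timed(interleaved)

print(f"One task at a time:       {sequential:.1f} seconds")
print(f"Switching back and forth: {switching:.1f} seconds")
```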

Every time we try to mentally juggle, we’re more likely to drop a ball. Attention is important. But we keep allocating thinner and thinner slices of it. And a big part of the reason is the smartphone that is probably within arm’s reach of you right now. Why? Because of something called intermittent variable rewards. Slot machines use it. And that’s probably why slot machines make more money in the US than baseball, movies and theme parks combined. Tristan Harris, who is taking technology to task for hijacking our brains, explains the concept: “If you want to maximize addictiveness, all tech designers need to do is link a user’s action (like pulling a lever) with a variable reward. You pull a lever and immediately receive either an enticing reward (a match, a prize!) or nothing. Addictiveness is maximized when the rate of reward is most variable.”

Your smartphone is no different. In this case, the reward is a new email, Facebook post, Instagram photo or Tinder match. Intermittent variable rewards – together with the fear of missing out – makes your smartphone as addictive as a slot machine.
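
To see the mechanic Harris describes in miniature, here’s a toy simulation — my own sketch in Python, with an invented 30% hit rate, not anything measured from a real app — of a “lever” that pays off unpredictably:

```python
# Toy model of an intermittent variable reward schedule: an action
# (checking the phone) that sometimes pays off and sometimes doesn't,
# on no predictable schedule.
import random

random.seed(7)  # fixed seed so the run is repeatable

def check_phone(hit_rate=0.3):
    """One pull of the lever: a 'reward' arrives some unpredictable fraction of the time."""
    return random.random() < hit_rate

CHECKS = 20
results = [check_phone() for _ in range(CHECKS)]

for i, hit in enumerate(results, start=1):
    print(f"Check #{i:2d}: {'new message!' if hit else 'nothing...'}")

print(f"\n{sum(results)} payoffs in {CHECKS} checks -- and no way to know "
      "which check will be the one that pays. That unpredictability is the hook.")
```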

I’m sorry, but I’m no match for all of that.

Damn You Technology…

Quit batting your seductive visual sensors at me. You know I can’t resist. But I often wonder what I’m giving up when I give in to your temptations. That’s why I was interested in reading Tom Goodwin’s take on the major theme at SXSW – the Battle for Humanity. He broke it down into three sub-themes. I agree with them. In fact, I’ve written on all of them in the past. They were:

Data Trading – We’re creating a market for data. But when you’re the one that generated that data, who should own it?

Shift to No Screens – an increasing number of connected devices will change our concept of what it means to be online.

Content Tunnel Vision – As the content we see is increasingly filtered based on our preferences, what does that do for our perception of what is real?

But while we’re talking about our imminent surrender to the machines, I feel there are some other themes that also merit some discussion. Let’s limit it to two today.

A New Definition of Connection and Community


Robert Sapolsky

A few weeks ago I read a fascinating article by neuroendocrinologist and author Robert Sapolsky. In it, he posits that understanding Capgras syndrome is the key to understanding the Facebook society. Capgras, first identified by French psychiatrist Joseph Capgras, is a disorder in which we can recognize a person’s face but can’t retrieve feelings of familiarity. Those afflicted can identify the face of a loved one but swear that it’s actually an identical imposter. Recognition of a person and retrieval of the emotions attached to that person are handled by two different parts of the brain. When the connection is broken, Capgras syndrome is the result.

This bifurcation of how we identify people is interesting. There is the yin and yang of cognition and emotion. The fusiform gyrus cognitively “parses” the face and then the brain retrieves the emotions and memories that are associated with it. To a normally functioning brain, it seems seamless and connected, but because two different regions (or, in the case of emotion, a network of regions) are involved, they can neurologically evolve independently of each other. And in the age of Facebook, that could mean a significant shift in the way we recognize connections and create “cognitive communities.” Sapolsky elaborates:

Through history, Capgras syndrome has been a cultural mirror of a dissociative mind, where thoughts of recognition and feelings of intimacy have been sundered. It is still that mirror. Today we think that what is false and artificial in the world around us is substantive and meaningful. It’s not that loved ones and friends are mistaken for simulations, but that simulations are mistaken for them.

As I said in a column a few months back, we are substituting surface cues for familiarity. We are rushing into intimacy without all the messy, time-consuming process of understanding and shared experience that generally accompanies it.

Brains do love to take short cuts. They’re not big on heavy lifting. Here’s another example of that…

Free Will is Replaced with An Algorithm


Yuval Harari

In a conversation with historian Yuval Harari, author of the best seller Sapiens, Derek Thompson from the Atlantic explored “The Post Human World.” One of the topics they discussed was the End of Individualism.

Humans (or, at least, most humans) have believed our decisions come from a mystical soul – a transcendental something that lives above our base biology and is in control of our will. Wrapped up in this is the concept of us as an individual and our importance in the world as free thinking agents.

In the past few decades, there has been a growing realization that our notion of “free will” is just the result of a cascade of biochemical processes. There is nothing magical here; there is just a chain of synaptic switches being thrown. And, that being the case, if a computer can process things faster than our brains can, should we simply relegate our thinking to a machine?

In many ways, this is already happening. We trust Google Maps or our GPS device more than we trust our ability to find our own way. We trust Google Search more than our own memory. We’re on the verge of trusting our wearable fitness tracking devices more than our own body’s feedback. And in all these cases, our trust in tech is justified. These things are usually right more often than we are. But when it comes to humans vs. machines, they represent a slippery slope that we’re already well down. Harari speculates on what might be at the bottom:

What really happens is that the self disintegrates. It’s not that you understand your true self better, but you come to realize there is no true self. There is just a complicated connection of biochemical connections, without a core. There is no authentic voice that lives inside you.

When I lie awake worrying about technology, these are the types of things I think about. The big question is: is humanity an outmoded model? The fact is that we evolved to be successful in a certain environment. But here’s the irony: we were so successful that we changed that environment into one where the tools we’ve created, not their creators, are the most successful adaptation. We may have made ourselves obsolete. And that’s why really smart humans like Bill Gates, Elon Musk and Stephen Hawking are so worried about artificial intelligence.

“It would take off on its own, and re-design itself at an ever increasing rate,” said Hawking in a recent interview with BBC. “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Worried about a machine taking your job? That may be the least of your worries.


Why Millennials are so Fascinating

When I was growing up, there was a lot of talk about the Generation Gap. This referred to the ideological gap between my generation – the Baby Boomers – and our parents’ generation – the Silent Generation (1923 – 1944).

But in terms of behavior, there was a significant gap even between early Baby Boomers and those who came at the tail end of the boom – like myself. Generations are products of their environment, and there was a significant change in our environment over the 20-year run of the Baby Boomers – from 1945 to 1964. During that time, TV came into most of our homes. Later boomers, like myself, were raised with TV. And I believe the adoption of that one technology created an unbridgeable ideological gap that is still impacting our society.

The adoption of ubiquitous technologies – like TV and, more recently, connective platforms like mobile phones and the Internet – inevitably triggers massive environmental shifts. This is especially true for generations that grow up with the technology. Our brain goes through two phases where it literally rewires itself to adapt to its environment. One of those phases happens from birth to about 2 or 3 years of age, and the other happens during puberty – from 14 to 20 years of age. A generation that goes through both of those phases while exposed to a new technology will inevitably be quite different from the generation that preceded it.

The two phases of our brain’s restructuring – also called neuroplasticity – are quite different in their goals. The first period – right after birth – rewires the brain to adapt to its physical environment. We learn to adapt to external stimuli and to interact with our surroundings. The second phase is perhaps even more influential in terms of who we will eventually be. This is when our brain creates its social connections. It’s also when we set our ideological compasses. Technologies we spend a huge amount of time with will inevitably impact both those processes.

That’s what makes Millennials so fascinating. It’s probably the first generation since my own that bridges the adoption of a massively influential technological change. Most definitions of this generation have it starting in the early 80’s and extending to 1996 or ’97. This means the early Millennials grew up in an environment that was not all that different from that of the generation that preceded them. The technologies undergoing massive adoption in the early 80’s were VCRs and microwaves – hardly earth-shaking in terms of environmental change. But late Millennials, like my daughters, grew up during the rapid adoption of three massively disruptive technologies: mobile phones, computers and the Internet. So we have a completely different environment to which the brain must adapt, not only from generation to generation, but within the generation itself. This makes Millennials a very complex generation to pin down.

In terms of trying to understand this, let’s go back to my generation – the Baby Boomers – to see how environmental adaptation can alter the face of society. Boomers who grew up in the late 40’s and early 50’s were much different than boomers who grew up just a few years later. Early boomers probably didn’t have a TV. Only the wealthiest families would have been able to afford one. In 1951, only 24% of American homes had a TV. But by 1960, almost 90% of Americans had one.

Whether we like to admit it or not, the values of my generation were shaped by TV. But this was not a universal process. The impact of TV was dependent on household income, which would have been correlated with education. So TV impacted the societal elite first and then trickled down. This elite segment would have also been the one most likely to attend college. So, in the mid-60’s, you had a segment of a generation whose values and worldview were at least partially shaped by TV – and its creation of a “global village” – and who suddenly came together during a time and place (college) when we build the persona foundations we will inhabit for the rest of our lives. You had another segment of a generation that didn’t have this same exposure and didn’t pursue a post-secondary education. The Vietnam War didn’t create the counter-cultural revolution. It just gave it a handy focal point that highlighted the ideological rift not only between two generations, but also within the Baby Boomers themselves. At that point in history, part of our society turned right and part turned left.

Is the same thing happening with Millennials now? Certainly the worldview of at least the younger Millennials has been shaped through exposure to connected media. When polled, they inevitably have dramatically different opinions about things like religion, politics, science – well – pretty much everything. But even within the Millennial camp, their views often seem incoherent and confusing. Perhaps another intra-generational divide is forming. The fact is it’s probably too early to tell. These things take time to play out. But if it plays out like it did last time this happened, the impact will still be felt a half century from now.

NBC’s Grip on Olympic Gold Slipping

When it comes to benchmarking stuff, nothing holds a candle to the quadrennial sports-statzapooloza we call the Summer Olympics. After 3 years, 11 months and 13 days of not giving a crap about sports like team pursuit cycling or half-heavyweight judo, we suddenly get into fist fights over 3 one-hundredths of a second or an unawarded yuko.

But it’s not just sports that are thrown into comparative focus by the Olympic games. They also provide a chance to take a snapshot of media consumption trends. The Olympics is probably the biggest show on earth. With the possible exception of the World Cup, it’s the time when the highest number of people on the planet are all watching the same thing at the same time. This makes it advertising nirvana.

Or it should.

Over the past few Olympics, the way we watch various events has been changing because of the nature of the Games themselves. There are 306 separate events in 35 recognized sports spread over 16 days of competition. The Olympics play to a global audience, which means that coverage has to span 24 time zones. At any given time, on any given day, there could be 6 or 7 events running simultaneously. In fact, as I’m writing this, diving, volleyball, men’s omnium cycling, Greco-Roman wrestling, badminton, field hockey and boxing are all happening at the same time.

This creates a challenge for network TV coverage. The Olympics are hardly a one-size-fits-all spectacle. So, if you’re NBC and you’ve shelled out 1.6 billion dollars to provide coverage, you have a dilemma: how do you assemble the largest possible audience to show all those really expensive ads to? How do you keep all those advertisers happy?

NBC’s answer, it seems, is to repackage the Olympics as a scripted mini-series. It means throttling down real time streaming or live broadcast coverage on some of the big events so these can be assembled into packaged stories during their primetime coverage. NBC’s chief marketing officer, John Miller, was recently quoted as saying, “The people who watch the Olympics are not particularly sports fans. More women watch the games than men, and for the women, they’re less interested in the result and more interested in the journey. It’s sort of like the ultimate reality show and miniseries wrapped into one.”

So, how is this working out for NBC? Not so well, as it turns out.

Ratings are down, with NBC posting the lowest primetime numbers since 1992. The network has come under heavy fire for what is quite possibly the worst Olympic coverage in the history of the games. Let’s ignore for a moment their myopic focus on US contestants and a handful of superstars like Usain Bolt (which may not be irritating unless you’re an international viewer like myself). Their heavy-handed attempt to control and script the fragmented and emergent drama of any Olympic games has stumbled out of the blocks and fallen flat on its face.

I would categorize this as an “RTU/WTF.” The first three letters stand for “Research tells us…” I think you can figure out the last three. I’m sure NBC did their research to figure out what they thought the audience really wanted in Olympic coverage. I’m positive there was a focus group somewhere that told the network what it wanted to hear: “Screw real-time results. What we really want is for you to tell us – with swelling music, extreme close-ups and completely irrelevant vignettes – the human drama that lies behind the medals…” And, in the collective minds of NBC executives, they quickly added, “…with a zillion commercial breaks and sponsorship messages.”

But it appears that this isn’t what we want. It’s not even close. We want to see the sports we’re interested in, on our device of choice and at the time that best suits us.

This, in a nutshell, is the disruption that is broadsiding the advertising industry at full ramming speed. It was exactly what I was talking about in my last column. NBC may have been able to play their game when they were our only source of information and we were held captive by this scarcity. But over the past 3 Olympic games, starting in Athens in 2004, technology has essentially erased that scarcity. The reality no longer fits NBC’s strategy. Coverage of the Olympics is now a multi-channel affair. What we’re looking for is a way to filter the coverage based on what is most interesting to us, not to be spoon-fed the coverage that NBC feels has the highest revenue potential.

It’s a different world, NBC. If you’re planning to compete in Tokyo, you’d better change your game plan, because you’re still playing like it’s 1996.


What Would a “Time Well Spent” World Look Like?

I’m worried about us. And it’s not just because we seem bent on death by ultra-conservative parochialism and xenophobia. I’m worried because I believe we’re spending all our time doing the wrong things. We’re fiddling while Rome burns.

Technology is our new drug of choice and we’re hooked. We’re fascinated by the trivial. We’re dumping huge gobs of time down the drain playing virtual games, updating social statuses, clicking on clickbait and watching videos of epic wardrobe malfunctions. Humans should be better than this.

It’s okay to spend some time doing nothing. The brain needs some downtime. But something, somewhere has gone seriously wrong. We are now spending the majority of our lives doing useless things. TV used to be the biggest time suck, but in 2015, for the first time ever, the boob tube was overtaken by time spent with mobile apps. According to a survey conducted by Flurry, in the second quarter of 2015 we spent about 2.8 hours per day watching TV. And we spent 3.3 hours on mobile apps. That’s a grand total of 6.1 hours per day, or one third of the time we spend awake. Yes, both things can happen at the same time, so there is undoubtedly overlap, but still – that’s a scary-assed statistic!

And it’s getting worse. In a previous Flurry poll conducted in 2013, we spent a total of 298 minutes per day between TV and mobile apps, versus 366 minutes in 2015. That’s a 22.8% increase in just two years. We’re spending way more time doing nothing. And those totals don’t even include things like time spent in front of a gaming console. For kids, tack on an average of another 10 hours per week, and you can double that for hard-core male gamers. Our addiction to gaming has even led to death in extreme cases.

Even in the wildest stretches of imagination, this can’t qualify as “time well spent.”

We’re treading on very dangerous and very thin ice here. And we no longer have history to learn from. It’s the first time we’ve ever encountered this. Technology is now only one small degree of separation from plugging directly into the pleasure center of our brains. And science has proven that a good shot of self-administered dopamine can supersede everything – water, food, sex. True, these experiments were conducted on rats – primarily because it would be unethical to go too far in replicating them with humans – but are you willing to risk the entire future of mankind on the bet that we’re really that much smarter than rats?

My fear is that technology is becoming a slightly more sophisticated lever we push to get that dopamine rush. And developers know exactly what they’re doing. They are making that lever as addictive as possible. They are pushing us towards the brink of death by technological lobotomization. They’re lulling us into a false sense of security by offering us the distraction of viral videos, infinitely scrolling social notification feeds and mobile game apps. It’s the intellectual equivalent of fast food – quite literally “brain candy.”

Here the hypocrisy of for-profit interest becomes evident. The corporate response typically rests on individual freedom of choice and the consumer’s ability to exercise will power. “We are just giving them what they’re asking for,” touts the stereotypical PR flack. But if you have an entire industry with reams of developers and researchers all aiming to hook you on their addictive product and your only defense is the same faulty neurological defense system that has already fallen victim to fast food, porn, big tobacco, the alcohol industry and the $350 billion illegal drug trade, where would you be placing your bets?

Technology should be our greatest achievement. It should make us better, not turn us into a bunch of lazy screen-addicted louts. And it certainly could be this way. What would it mean if technology helped us spend our time well? This is the hope behind the Time Well Spent Manifesto. Tristan Harris, a design ethicist and product philosopher at Google, is one of its co-directors. Here is an excerpt from the manifesto:

We believe in a new kind of design, that lets us connect without getting sucked in. And disconnect, without missing something important.

And we believe in a new kind of economy that’s built to help us spend time well, where products compete to help us live by our values.

I believe in the Manifesto. I believe we’re being willingly led down a scary and potentially ruinous path. Worst of all, I believe there is nothing we can – or will – do about it. Problems like this are seldom solved by foresight and good intentions. Things only change after we drive off the cliff.

The problem is that most of us never see it coming. And we never see it coming because we’re too busy watching a video of masturbating monkeys on YouTube.