The Importance of Playing Make-Believe

One of my favourite sounds in the world is children playing. Although our children are well past that age, we have stayed in a neighbourhood where new families move in all the time. One of the things that has always amazed me is a child’s ability to make believe. I used to do this but I don’t any more. At least, I don’t do it the same way I used to.

Just take a minute to think about the term itself: make-believe. The very words connote the creation of an imaginary world that you and your playmates can share, even if only for a brief, fleeting moment. Out of the ether, you can create an ephemeral reality where you can play God. A few adults can still do that. George R.R. Martin pulled it off. J.K. Rowling did likewise. But for most of us, our days of make-believe are well behind us.

I worry about the state of play. I am concerned that rather than making believe themselves, children today are playing in the manufactured and highly commercialized imaginations of profit-hungry corporations. There is no making — there is only consuming. And that could have some serious consequences.

Although we don’t use imagination the way we once did, it is the foundation for the most important cognitive tasks we do. It was Albert Einstein who said, “Imagination is more important than knowledge. For knowledge is limited, whereas imagination embraces the entire world, stimulating progress, giving birth to evolution.”

It is imagination that connects the dots, explores the “what-ifs” and peeks beyond the bounds of the known. It is what separates us from machines.

In that, Einstein presciently nailed the importance of imagination. Only here does the mysterious alchemy of the human mind somehow magically weave fully formed worlds out of nothingness and snippets of reality. We may not play princess anymore, but our ability to imagine underpins everything of substance that we think about.

The importance of playing make-believe goes beyond cognition. Imagination is also essential to our ability to empathize. We need it to put ourselves in the place of others. Our “theory of mind” is just one of the many facets of imagination.

This thing we take for granted has been linked to a massive range of essential cognitive developments. In addition to the above examples, pretending gives children a safe place to begin to define their own place in society. It helps them explore interpersonal relationships. It creates the framework for them to assimilate information from the world into their own representation of reality.

We are not the only animals that play when we’re young. It’s true for many mammals, and scientists have discovered it’s also essential in species as diverse as crocodiles, turtles, octopuses and even wasps.

For other species, though, it seems play is mainly intended to help them come to terms with surviving in the physical world. We’re alone in our need for elaborate play involving imagination and cognitive games.

With typical human hubris, we adults have been on a century-long mission to structure the act of play. In doing so, we have been imposing our own rules, frameworks and expectations on something we should be keeping as is. Much of the value of play comes from its very lack of structure. Playing isn’t as effective when it’s done under adult supervision. Kids have to be kids.

Play definitely loses much of its value when it becomes passive consumption of content imagined and presented by others through digital entertainment channels. Childhood is meant to give us a blank canvas to colour with our imagination.

As we grow, the real world encroaches on this canvas.  But the delivery of child-targeted content through technology is also shrinking the boundaries of our own imagination.

Still, despite corporate interests that run counter to playing in its purest sense, I suspect that children may be more resilient than I fear. After all, I can still hear the children playing next door. And their imaginations still awe and inspire me.

Less Tech = Fewer Regrets

In a tech-ubiquitous world, I fear our reality is becoming more “tech” and less “world.” But how do you fight that? Well, if you’re Kendall Marianacci – a recent college grad – you ditch your phone and move to Nepal. In the process, she learned that “paying attention to the life in front of you opens a new world.”

In a recent post, she reflected on lessons learned by truly getting off the grid:

“Not having any distractions of a phone and being immersed in this different world, I had to pay more attention to my surroundings. I took walks every day just to explore. I went out of my way to meet new people and ask them questions about their lives. When this became the norm, I realized I was living for one of the first times of my life. I was not in my own head distracted by where I was going and what I needed to do. I was just being. I was present and welcoming to the moment. I was compassionate and throwing myself into life with whoever was around me.”

It’s sad and a little shocking that we have to go to such extremes to realize how much of our world can be obscured by a little 5-inch screen. Where did tech that was supposed to make our lives better go off the rails? And was the derailment intentional?

“Absolutely,” says Jesse Weaver, a product designer. In a post on Medium.com, he lays out – in alarming terms – our tech dependency and the trade-off we’re agreeing to:

“The digital world, as we’ve designed it, is draining us. The products and services we use are like needy friends: desperate and demanding. Yet we can’t step away. We’re in a codependent relationship. Our products never seem to have enough, and we’re always willing to give a little more. They need our data, files, photos, posts, friends, cars, and houses. They need every second of our attention.

We’re willing to give these things to our digital products because the products themselves are so useful. Product designers are experts at delivering utility.”

But are they? Yes, there is utility here, but it’s wrapped in a thick layer of addiction. What product designers are really good at is fostering addiction by dangling a carrot of utility. And, as Weaver points out, we often mistake utility for empowerment:

“Empowerment means becoming more confident, especially in controlling our own lives and asserting our rights. That is not technology’s current paradigm. Quite often, our interactions with these useful products leave us feeling depressed, diminished, and frustrated.”

That’s not just Weaver’s opinion. A new study from HumaneTech.com backs it up with empirical evidence. They partnered with Moment, a screen time tracking app, “to ask how much screen time in apps left people feeling happy, and how much time left them in regret.”

According to 200,000 iPhone users, here are the apps that make people happiest:

  1. Calm
  2. Google Calendar
  3. Headspace
  4. Insight Timer
  5. The Weather
  6. MyFitnessPal
  7. Audible
  8. Waze
  9. Amazon Music
  10. Podcasts

That’s three meditation apps, three utilitarian apps, one fitness app, one entertainment app and two apps that help you broaden your intellectual horizons. If you’re talking human empowerment – according to Weaver’s definition – you could do a lot worse than this roundup.

But here were the apps that left their users with a feeling of regret:

  1. Grindr
  2. Candy Crush Saga
  3. Facebook
  4. WeChat
  5. Candy Crush
  6. Reddit
  7. Tweetbot
  8. Weibo
  9. Tinder
  10. Subway Surf

What’s even more interesting is the average time spent in these apps. For the first group, the average daily usage was 9 minutes. For the regret group, the average daily time spent was 57 minutes! We feel better about apps that do their job, add something to our lives and then let us get on with living that life. What we hate are time sucks that may offer a kernel of functionality wrapped in an interface that ensnares us like a digital spider web.

This study comes from the Center for Humane Technology, headed by ex-Googler Tristan Harris. The goal of the Center is to encourage designers and developers to create apps that move “away from technology that extracts attention and erodes society, towards technology that protects our minds and replenishes society.”

That all sounds great, but what does it really mean for you and me and everybody else who hasn’t moved to Nepal? It all depends on what revenue model is driving development of these apps and platforms. If it is anything that depends on advertising – in any form – don’t count on any nobly intentioned shifts in design direction anytime soon. More likely, it will mean some half-hearted placations like Apple’s new Screen Time warning that pops up on your phone every Sunday, giving you the illusion of control over your behaviour.

Why an illusion? Because things like Apple’s Screen Time are great for our pre-frontal cortex, the intent-driven part of our rational brain that puts our best intentions forward. They’re not so good for our Lizard brain, which subconsciously drives us to play Candy Crush and swipe our way through Tinder. And when it comes to addiction, the Lizard brain has been on a winning streak for most of the history of mankind. I don’t like our odds.

The developers’ escape hatch is always the same – they’re giving us control. It’s our own choice, and freedom of choice is always a good thing. But there is an unstated deception here. It’s the same lie that Mark Zuckerberg told last Wednesday when he laid out the privacy-focused future of Facebook. He’s putting us in control. But he’s not. What he’s doing is making us feel better about spending more time on Facebook. And that’s exactly the problem. The less we worry about the time we spend on Facebook, the less we will think about it at all. The less we think about it, the more time we will spend. And the more time we spend, the more we will regret it afterwards.

If that doesn’t seem like an addictive cycle, I’m not sure what does.


I’ll Take Reality with a Side of Augmentation, Please….

We don’t want to replace reality. We just want to nudge it a little.

At least, that seems to be the upshot of a new survey from the international law firm Perkins Coie. The firm asked start-up founders, tech execs, investors and consultants about their predictions for both Augmented Reality (AR) and Virtual Reality (VR). While VR had a head start, the majority of those surveyed (67%) felt that AR would overtake VR in revenue within the next three years.

The reasons they gave were mainly focused on roadblocks in the technology itself: VR headsets were too bulky, the user experience was not smooth enough due to technical limitations, the cost of adopting VR was higher than AR and there was not enough content available in the VR universe.

I think there’s another reason. We actually like reality. We’re not looking to isolate ourselves from reality. We’re looking to enhance it.

Granted, if we are talking about adoption rates, there seem to be a lot more potential applications for Augmented Reality. Everything you do could stand a little augmentation. For example, you could probably do your job better if your own abilities were augmented with real-time information. Pilots would be better at flying. Teachers would be better at teaching. Surgeons would be better at performing surgery. Mechanics would be better at fixing things.

You could also enjoy things more with a little augmentation. Looking for a restaurant would be easier. Taking a tour would be more informative. Attending a play or watching a movie could be candidates for a little augmented content. AR could even make your layover at an airport less interminable.

I think of VR as a novelty. The sheer nerdiness of it makes it a technology of limited appeal. As one developer quoted in the study says, “Not everyone is a gadget freak. The industry needs to appeal to those who aren’t.” AR has a clearly understood user benefit. We can all grasp a scenario where augmentation could make our lives better in some way. But it’s hard to understand how VR would have a real impact on our day to day lives. Its appeal seems to be constrained to entertainment, and even then, it’s entertainment aimed at a limited market.

The AR wave is advancing in some interesting directions. Google Glass has retreated from the consumer market and is currently concentrating on business and industrial applications. The premise of Glass is to allow you to work smarter, access instant expertise and stay hands-on. Bose is betting on a subset of AR, which it dubs Aural Augmentation. It believes sound is the best way to add content to our lives. And even Amazon has borrowed an idea from IKEA and stepped into the AR ring with Amazon AR View, which lets you place items you’re considering buying in your home to see if they fit before you buy.

One big player that is still betting heavily on VR is Facebook, with its Oculus headset. This is not surprising, given that Mark Zuckerberg is the quintessential geek and seems intent on manufacturing our social reality for us. In a demonstration a year ago, Zuckerberg struck all kinds of tone-deaf clunkers when he and Facebook social VR chief Rachel Franklin took on cartoon personas for a VR tour of devastated Puerto Rico. The juxtaposition could only be described as weird: a scene of human misery that was all too real, visited by a cartoon Zuckerberg. At one point, he enthused, “It feels like we’re really here in Puerto Rico.”

You weren’t, Mark. You were safely in Facebook headquarters in Menlo Park, California – wearing a headset that made you look like a dork. That was the reality.

Bose Planning to Add a Soundtrack to Our World

Bose is placing a big bet on AR…

Or more correctly: AAR.

When we think of AR (Augmented Reality) we tend to think of digital data superimposed on our field of vision. But Bose is sticking to their wheelhouse and bringing audio to our augmented world – hence AAR – Audio Augmented Reality.

For me – I started my career as a radio copywriter and producer – it’s an intriguing idea. And it just might be a perfect match for how our senses parse the world around us.

Sound tends to be underappreciated when we think about how we experience the world. But it packs a hell of an emotional wallop. Theme park designers have known this for years. They call it underscoring. That’s the music you hear when you walk down Main Street USA in Disneyland (which could be the Desecration Rag by Felix Arndt), or when you visit the Wizarding World of Harry Potter at Universal (perhaps Hedwig’s Theme by John Williams). You might not even be aware of it. But it bubbles just below the level of consciousness, wiring itself directly to your emotional hot buttons. Theme parks would be much less appealing without a soundtrack. The same is true for the world in general.

Cognitively, we process sound entirely differently than we process sights. Vision is our primary sensory portal, and because of this, it tends to dominate our attentional focus. This means the brain has limited bandwidth to process conflicting visual stimuli. If we layer additional information over our view of the world, as most AR does, we force the brain to make a context switch. Even with a heads-up display, the brain has to switch between the two. We can’t concentrate on both at the same time.

But our brains can handle the job of combining sight and sound very nicely. It’s what we evolved to do. We automatically synthesize the two. Unlike layered visual information, which must borrow attention from something else, sight and sound are not a zero-sum game.

Bose made their announcement at SXSW, but I first became aware of the plan just last week. And I became aware because Bose had bought out Detour, a start-up based in San Francisco that produced immersive audio walking tours. I was using the Detour platform to create audio tours that could be done by bike. At the end of February, I received an email abruptly announcing that access to the Detour platform would end the very next day. I’ve been around the high-tech biz long enough to know that there was more to this than a simple discontinuation of the platform. There was another shoe yet to drop.

Last week, it dropped. The reason for the abrupt end was that Detour had been purchased by Bose.

Although Detour never gained the traction that I’m sure founder Andrew Mason (who also founded GroupOn) hoped for, the tours were exceptionally well produced. I had the opportunity to take several of them while in San Francisco. It was my first real experience with audio augmented reality. I felt like I was walking through a documentary. At no time did I feel my attention was torn. For the most part, my phone stayed in my pocket. It was damned near seamless.

Regular readers of mine will know that I’m more than a little apprehensive about the whole area of Virtual and Augmented Reality. But I have to admit, Bose’s approach sounds pretty good so far.


Attention: Divided

I’d like you to give me your undivided attention. I’d like you to – but you can’t. First, I’m probably not interesting enough. Second, you no longer live in a world where that’s possible. And third, even if you could, I’m not sure I could handle it. I’m out of practice.

The fact is, our attention is almost never undivided anymore. Let’s take talking, for example. You know: old-fashioned, face-to-face, sharing-the-same-physical-space communication. It’s the one channel that most demands undivided attention. But when is the last time you had a conversation where you were giving it 100 percent of your attention? I actually had one this past week, and I have to tell you, it unnerved me. I was meeting with a museum curator, and she immediately locked eyes with me and gave me the full breadth of her attention. I faltered. I couldn’t hold her gaze. As I talked, I scanned the room we were in. It’s probably been years since someone did that to me. And nary a smartphone was in sight.

If this is true when we’re physically present, imagine the challenge in other channels. Take television, for instance. We don’t watch TV like we used to. When I was growing up, I would be verging on catatonia as I watched the sparks fly between Batman and Catwoman (the Julie Newmar version – with all due respect to Eartha Kitt and Lee Meriwether). My dad used to call it the “idiot box.” At the time, I thought it was a comment on the quality of programming, but I now realize he was referring to my mental state. You could have dropped a live badger in my lap and not an eye would have been batted.

But that’s definitely not how we watch TV now. A recent study indicates that 177 million Americans have at least one other screen going – usually a smartphone – while they watch TV. According to Nielsen, there are only 120 million TV households. That means that 1.48 adults per household are definitely dividing their attention amongst at least two devices while watching Game of Thrones. My daughters and wife are squarely in that camp. Ironically, I now get frustrated because they don’t watch TV the same way I do – catatonically.

Now, I’m sure watching TV does not represent the pinnacle of focused mindfulness. But this could be a canary in a coalmine. We simply don’t allocate undivided attention to anything anymore. We think we’re multi-tasking, but that’s a myth. We don’t multi-task – we mentally fidget. We have the average attention span of a gnat.

So, what is the price we’re paying for living in this attention-deficit world? Well, first, there’s a price to be paid when we do decide to communicate. I’ve already stated how unnerving it was for me when I did have someone’s laser-focused attention. But the opposite is also true. It’s tough to communicate with someone who is obviously paying little attention to you. Try presenting to a group that is more interested in chatting with each other. Research shows that our ability to communicate effectively erodes quickly when we’re not getting feedback that the person or people we’re talking to are actually paying attention to us. Effective communication requires an adequate allocation of attention on both ends; otherwise it spins into a downward spiral.

But it’s not just communication that suffers. It’s our ability to focus on anything. It’s just too damned tempting to pick up our smartphone and check it. We’re paying a price for our mythical multitasking – Boise State professor Nancy Napier suggests a simple test to prove this. Draw two lines on a piece of paper. While having someone time you, write “I am a great multi-tasker” on one, then write down the numbers from 1 to 20 on the other. Next, repeat this same exercise, but this time, alternate between the two: write “I” on the first line, then “1” on the second, then go back and write “a” on the first, “2” on the second and so on. What’s your time? It will probably be double what it was the first time.

Every time we try to mentally juggle, we’re more likely to drop a ball. Attention is important. But we keep allocating thinner and thinner slices of it. And a big part of the reason is the smartphone that is probably within arm’s reach of you right now. Why? Because of something called intermittent variable rewards. Slot machines use it. And that’s probably why slot machines make more money in the US than baseball, movies and theme parks combined. Tristan Harris, who is taking technology to task for hijacking our brains, explains the concept: “If you want to maximize addictiveness, all tech designers need to do is link a user’s action (like pulling a lever) with a variable reward. You pull a lever and immediately receive either an enticing reward (a match, a prize!) or nothing. Addictiveness is maximized when the rate of reward is most variable.”

Your smartphone is no different. In this case, the reward is a new email, Facebook post, Instagram photo or Tinder match. Intermittent variable rewards – together with the fear of missing out – make your smartphone as addictive as a slot machine.
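If you want to see that mechanic laid bare, here is a minimal sketch in Python (my own illustration, not anything from Harris, Apple or any app maker): a reward that arrives on an unpredictable schedule, so any given check of the “lever” might be the one that pays off.

```python
import random

# A toy model of an intermittent variable reward (illustrative only -- the
# reward probability here is an arbitrary assumption, not a measured value).
def pull_lever(reward_probability: float = 0.3) -> bool:
    """One pull of the lever / check of the feed: True means a reward appeared."""
    return random.random() < reward_probability

def check_phone(checks: int = 20, reward_probability: float = 0.3) -> int:
    """Simulate a run of phone checks and count how many actually paid off."""
    return sum(pull_lever(reward_probability) for _ in range(checks))

if __name__ == "__main__":
    hits = check_phone()
    print(f"Out of 20 checks, {hits} delivered something new -- and you never knew which ones would.")
```

The point is the unpredictability: because the payoff rate is variable, every single check carries the possibility of a reward, which is exactly the hook Harris describes.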

I’m sorry, but I’m no match for all of that.

Damn You Technology…

Quit batting your seductive visual sensors at me. You know I can’t resist. But I often wonder what I’m giving up when I give in to your temptations. That’s why I was interested in reading Tom Goodwin’s take on the major theme at SXSW – the Battle for Humanity. He broke this down into three sub-themes. I agree with them. In fact, I’ve written on all of them in the past. They were:

Data Trading – We’re creating a market for data. But when you’re the one who generated that data, who should own it?

Shift to No Screens – An increasing number of connected devices will change our concept of what it means to be online.

Content Tunnel Vision – As the content we see is increasingly filtered based on our preferences, what does that do to our perception of what is real?

But while we’re talking about our imminent surrender to the machines, I feel there are some other themes that also merit some discussion. Let’s limit it to two today.

A New Definition of Connection and Community


Robert Sapolsky

A few weeks ago I read a fascinating article by neuroendocrinologist and author Robert Sapolsky. In it, he posits that understanding Capgras Syndrome is the key to understanding the Facebook society. Capgras, first identified by French psychiatrist Joseph Capgras, is a disorder in which we can recognize a person’s face but can’t retrieve the feelings of familiarity that should come with it. Those afflicted can identify the face of a loved one but swear it’s actually an identical imposter. Recognition of a person and retrieval of the emotions attached to that person are handled by two different parts of the brain. When the connection between them is broken, Capgras Syndrome is the result.

This bifurcation of how we identify people is interesting. There is the yin and yang of cognition and emotion. The fusiform gyrus cognitively “parses” the face and then the brain retrieves the emotions and memories that are associated with it. To a normally functioning brain, it seems seamless and connected, but because two different regions (or, in the case of emotion, a network of regions) are involved, they can neurologically evolve independently of each other. And in the age of Facebook, that could mean a significant shift in the way we recognize connections and create “cognitive communities.” Sapolsky elaborates:

Through history, Capgras syndrome has been a cultural mirror of a dissociative mind, where thoughts of recognition and feelings of intimacy have been sundered. It is still that mirror. Today we think that what is false and artificial in the world around us is substantive and meaningful. It’s not that loved ones and friends are mistaken for simulations, but that simulations are mistaken for them.

As I said in a column a few months back, we are substituting surface cues for familiarity. We are rushing into intimacy without all the messy, time-consuming process of understanding and shared experience that generally accompanies it.

Brains do love to take shortcuts. They’re not big on heavy lifting. Here’s another example of that…

Free Will Is Replaced with an Algorithm


Yuval Harari

In a conversation with historian Yuval Harari, author of the bestseller Sapiens, Derek Thompson from The Atlantic explored “The Post-Human World.” One of the topics they discussed was the End of Individualism.

Humans (or, at least, most humans) have believed our decisions come from a mystical soul – a transcendental something that lives above our base biology and is in control of our will. Wrapped up in this is the concept of us as an individual and our importance in the world as free thinking agents.

In the past few decades, there has been a growing realization that our notion of “free will” is just the result of a cascade of biochemical processes. There is nothing magical here; there is just a chain of synaptic switches being thrown. And if that’s the case – if a computer can process things faster than our brains – should we simply delegate our thinking to a machine?

In many ways, this is already happening. We trust Google Maps or our GPS device more than we trust our own ability to find our way. We trust Google Search more than our own memory. We’re on the verge of trusting our wearable fitness trackers more than our own body’s feedback. And in all these cases, our trust in tech is justified: these things are right more often than we are. But when it comes to humans vs. machines, they represent a slippery slope that we’re already well down. Harari speculates about what might be at the bottom:

What really happens is that the self disintegrates. It’s not that you understand your true self better, but you come to realize there is no true self. There is just a complicated connection of biochemical connections, without a core. There is no authentic voice that lives inside you.

When I lay awake worrying about technology, these are the types of things I think about. The big question is – is humanity an outmoded model? The fact is that we evolved to be successful in a certain environment. But here’s the irony: we were so successful that we changed that environment into one where the tools we’ve created, not their creators, are the most successful adaptation. We may have made ourselves obsolete. And that’s why really smart humans, like Bill Gates, Elon Musk and Stephen Hawking, are so worried about artificial intelligence.

“It would take off on its own, and re-design itself at an ever increasing rate,” said Hawking in a recent interview with the BBC. “Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Worried about a machine taking your job? That may be the least of your worries.


Why Millennials Are So Fascinating

When I was growing up, there was a lot of talk about the Generation Gap. This referred to the ideological gap between my generation – the Baby Boomers – and our parents’ generation – the Silent Generation (1923 – 1944).

But in terms of behavior, there was a significant gap even between early Baby Boomers and those who came at the tail end of the boom – like myself. Generations are products of their environment, and there was a significant change in our environment over the 20-year run of the Baby Boomers – from 1945 to 1964. During that time, TV came into most of our homes. Later boomers, like myself, were raised with TV. And I believe the adoption of that one technology created an unbridgeable ideological gap that is still impacting our society.

The adoption of ubiquitous technologies – like TV and, more recently, connective platforms like mobile phones and the Internet – inevitably triggers massive environmental shifts. This is especially true for generations that grow up with this technology. Our brain goes through two phases where it literally rewires itself to adapt to its environment. One of those phases happens from birth to about 2 to 3 years of age and the other happens during puberty – from 14 to 20 years of age. A generation that goes through both of those phases while exposed to a new technology will inevitably be quite different from the generation that preceded it.

The two phases of our brain’s restructuring – also called neuroplasticity – are quite different in their goals. The first period – right after birth – rewires the brain to adapt to its physical environment. We learn to adapt to external stimuli and to interact with our surroundings. The second phase is perhaps even more influential in terms of who we will eventually be. This is when our brain creates its social connections. It’s also when we set our ideological compasses. Technologies we spend a huge amount of time with will inevitably impact both those processes.

That’s what makes Millennials so fascinating. They are probably the first generation since my own to bridge the adoption of a massively influential technological change. Most definitions of this generation have it starting in the early ’80s and extending to 1996 or ’97. This means the early Millennials grew up in an environment that was not all that different from the one the preceding generation knew. The technologies undergoing massive adoption in the early ’80s were VCRs and microwaves – hardly earth-shaking in terms of environmental change. But late Millennials, like my daughters, grew up during the rapid adoption of three massively disruptive technologies: mobile phones, computers and the Internet. So we have a completely different environment to which the brain must adapt, not only from generation to generation but within the generation itself. This makes Millennials a very complex generation to pin down.

To understand this, let’s go back to my generation – the Baby Boomers – to see how environmental adaptation can alter the face of society. Boomers who grew up in the late ’40s and early ’50s were much different from boomers who grew up just a few years later. Early boomers probably didn’t have a TV; only the wealthiest families would have been able to afford one. In 1951, only 24% of American homes had a TV. But by 1960, almost 90% of American homes did.

Whether we like to admit it or not, the values of my generation were shaped by TV. But this was not a universal process. The impact of TV was dependent on household income, which would have been correlated with education. So TV impacted the societal elite first and then trickled down. This elite segment would have also been the one most likely to attend college. So, in the mid-’60s, you had a segment of a generation whose values and worldview were at least partially shaped by TV – and its creation of a “global village” – and who suddenly came together at a time and place (college) when we build the persona foundations we will inhabit for the rest of our lives. You had another segment of the generation that didn’t have this same exposure and didn’t pursue a post-secondary education. The Vietnam War didn’t create the countercultural revolution. It just gave it a handy focal point that highlighted the ideological rift not only between two generations, but also within the Baby Boomers themselves. At that point in history, part of our society turned right and part turned left.

Is the same thing happening with Millennials now? Certainly the worldview of at least the younger Millennials has been shaped through exposure to connected media. When polled, they inevitably have dramatically different opinions about things like religion, politics, science – well – pretty much everything. But even within the Millennial camp, their views often seem incoherent and confusing. Perhaps another intra-generational divide is forming. The fact is it’s probably too early to tell. These things take time to play out. But if it plays out like it did last time this happened, the impact will still be felt a half century from now.