There Are No Short Cuts to Being Human

The Velvet Sundown fooled a lot of people, including millions of fans on Spotify and the writers and editors at Rolling Stone. It was a band that suddenly showed up on Spotify several months ago, with full albums of vintage Americana-styled rock. Millions started streaming the band’s songs – except there was no band. The songs, the album art, the band’s photos – it was all generated by AI.

When you know this and relisten to the songs, you swear you would have never been fooled. Those who are now in the know say the music is formulaic, derivative and uninspired. Yet we were fooled, or, at least, millions of us were – taken in by an AI hoax, or what is now euphemistically labelled on Spotify as “a synthetic music project guided by human creative direction and composed, voiced and visualized with the support of artificial intelligence.”

Formulaic. Derivative. Synthetic. We mean these as criticisms. But they are accurate descriptions of exactly how AI works. It is synthesis by formulas (or algorithms) that parse billions or trillions of data points, identify patterns and derive the finished product from them. That is AI’s greatest strength…and its biggest downfall.

The human brain, on the other hand, works quite differently. Our biggest constraint is the limit of our working memory. When we analyze disparate data points, the available slots in our temporary memory bank can number in the low single digits. To cognitively function beyond this limit, we have to do two things: “chunk” them together into mental building blocks and code them with emotional tags. That is the human brain’s greatest strength… and again, its biggest downfall. What the human brain is best at is what AI is unable to do. And vice versa.
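If “chunking” sounds abstract, here is a small, purely illustrative Python sketch (the phone-number example and the grouping sizes are my own, not drawn from any particular study):

```python
# Purely illustrative: how "chunking" turns many items into a few working-memory slots.
def chunk(digits, sizes):
    """Split a string of digits into chunks of the given sizes."""
    chunks, start = [], 0
    for size in sizes:
        chunks.append(digits[start:start + size])
        start += size
    return chunks

number = "2505551234"               # ten separate items to hold in mind
grouped = chunk(number, [3, 3, 4])  # grouped the way we read a phone number

print(f"{len(number)} digits collapse into {len(grouped)} chunks: {grouped}")
```

Ten digits held one at a time would swamp those few working-memory slots; grouped the way we read a phone number, they take up only three.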

A few posts back, when talking about one less-than-impressive experience with an AI tool, I ended by musing about what role humans might play as AI evolves and becomes more capable. One possible answer is something labelled “HITL” or “Humans in the Loop.” It plugs the “humanness” that sits in our brains into the equation, allowing AI to do what it’s best at and humans to provide the spark of intuition or the “gut checks” that currently cannot come from an algorithm.

As an example, let me return to the subject of that previous post, building a website. There is a lot that AI could do to build out a website. What it can’t do very well is anticipate how a human might interact with the website. These “use cases” should come from a human, perhaps one like me.

Let me tell you why I believe I’m qualified for the job. For many years, I studied online user behavior quite obsessively and published several white papers that are still cited in the academic world. I was a researcher for hire, with contracts with all the major online players. I say this not to pump my own ego (okay, maybe a little bit – I am human after all) but to set up the process of how I acquired this particular brand of expertise.

It was accumulated over time, as I learned how to analyze online interactions, code eye-tracking sessions and talk to users about their goals and intentions. All the while, I was continually plugging new data into my few available working memory slots and “chunking” them into the building blocks of my expertise, to the point where I could quickly look at a website or search results page and provide a pretty accurate “gut call” prediction of how a user would interact with it. This is – without exception – how humans become experts at anything. Malcolm Gladwell called it the “10,000-hour rule.” For humans to add any value “in the loop” they must put in the time. There are no short cuts.

Or – at least – there never used to be. There is now, and that brings up a problem.

Humans now do something called “cognitive off-loading.” If something looks like it’s going to be a drudge to do, we now get ChatGPT to do it. This is the slogging mental work that our brains are not particularly well suited to. That’s probably why we hate doing it – the brain is trying to shirk the work by tagging it with a negative emotion (brains are sneaky that way). Why not get AI, which can instantly sort through billions of data points and synthesize them into a one-page summary, to do our dirty work for us?

But by off-loading, we short-circuit the very process required to build that uniquely human expertise. Writer, researcher and educational change advocate Eva Keiffenheim outlines the potential danger for humans who “off-load” to a digital brain: we may lose the sole advantage we can offer in an artificially intelligent world. “If you can’t recall it without a device, you haven’t truly learned it. You’ve rented the information. We get stuck at ‘knowing about’ a topic, never reaching the automaticity of ‘knowing how.’”

For generations, we’ve treasured the concept of “know how.” Perhaps, in all that time, we forgot how much hard mental work was required to gain it. That could be why we are quick to trade it away now that we can.

Do We Have the Emotional Bandwidth to Stay Curious?

Curiosity is good for the brain. It’s like exercise for our minds. It stretches the prefrontal cortex and whips the higher parts of our brains into gear. Curiosity also nudges our memory-making muscles into action and builds our brain’s capacity to handle uncertain situations.

But it’s hard work – mentally speaking. It takes effort to be curious, especially in situations where curiosity could figuratively “kill the cat.” The more dangerous our environment, the less curious we become.

A while back I talked about why the world no longer seems to make sense. Part of this is tied to our appetite for curiosity. Actively trying to make sense of the world puts us “out there”, leaving the safe space of our established beliefs behind. It is literally the definition of an “open mind” – a mind that has left itself open to being changed. And that’s a very uncomfortable place to be when things seem to be falling down around our ears.

Some of us are naturally more curious than others. Curious people typically achieve higher levels of education (learning and curiosity are two sides of the same coin). They are less likely to accept things at face value. They apply critical thinking to situations as a matter of course. Their brains are wired to be rewarded with a bigger dopamine hit when they learn something new.

Others rely more on what they believe to be true. They actively filter out information that may challenge those beliefs. They double down on what is known and defend themselves from the unknown. For them, curiosity is not an invitation, it’s a threat.

Part of this is a differing tolerance for something which neuroscientists call “prediction error” – the difference between what we think will happen and what actually does happen. Non-curious people perceive predictive gaps as threats and respond accordingly, looking for something or someone to blame. They believe it can’t be a mistaken belief that’s at fault; something else must have caused the error. Curious people look at prediction errors as continually running scientific experiments, giving them a chance to discover the errors in their current mental models and update them based on new information.

Our appetite for curiosity has a huge impact on where we turn to be informed. The incurious will turn to information sources that won’t challenge their beliefs. These are people who get their news from either end of the political bias spectrum, either consistently liberal or consistently conservative. Given that, they can’t really be called information sources so much as opinion platforms. Curious people are more willing to be introduced to non-conforming information. In terms of media bias, you’ll find them consuming news from the middle of the pack.

Given the current state of the world, more curiosity is needed but is becoming harder to find. When humans (or any animal, really) are threatened, we become less curious. This is a feature, not a bug. A curious brain takes a lot longer to make a decision than a non-curious one. It is the difference between thinking “fast” and “slow” – in the words of psychologist and Nobel laureate Daniel Kahneman. But this feature evolved when threats to humans were usually immediate and potentially fatal. A slow brain is not of any benefit if you’re at risk of being torn apart by a pack of jackals. But today, our jackal encounters are usually of the metaphorical type, not the literal one. And that’s a threat of a very different kind.

Whatever the threat, our brain throttles back our appetite for curiosity. Even the habitually curious develop defense mechanisms in an environment of consistently bad news. We seek solace in the trivial and avoid the consequential. We start conserving cognitive bandwidth for whatever impending doom we may be facing. We seek media that affirms our beliefs rather than challenges them.

This is unfortunate, because the threats we face today could use a little more curiosity.

Grandparenting in a Wired World

You might have missed it, but last Sunday was Grandparents Day. And the world has a lot of grandparents. In fact, according to an article in The Economist (subscription required), at no time in history has the ratio of grandparents to grandchildren been higher.

The boom in Boomer and Gen X grandparents was statistically predictable. Since 1960, global life expectancy has jumped from 51 years to 72 years. At the same time, the number of children a woman can expect to have in her lifetime has been roughly halved, from 5 to 2.4. Those two trendlines mean that the ratio of grandparents to children under 15 has vaulted from 0.46 in 1960 to 0.8 today. According to a little research The Economist conducted, there are an estimated 1.5 billion grandparents in the world.

My wife and I are two of them.

So – what does that mean to the three generations involved?

Grandparents have historically served two roles. First, they (and by “they,” I mean typically the grandmother) provided an extra set of hands to help with child rearing. And that makes a significant difference to the child, especially if they were born in an underdeveloped part of the world. Children in poorer nations with actively involved grandparents have a higher chance of survival. And in Sub-Saharan Africa, a child living with a grandparent is more likely to go to school.

But what about in developed nations, like ours? What difference could grandparents make? That brings us to the second role of grandparents – passing on traditions and instilling a sense of history. And with the western world’s obsession with fast forwarding into the future, that could prove to be of equal significance.

Here I have to shift from looking at global samples to focussing on the people that happen to be under our roof. I can’t tell you what’s happening around the world, but I can tell you what’s happening in our house.

First of all, when it comes to interacting with a grandchild, gender-specific roles are not as tightly bound in my generation as they were in previous generations. My wife and I pretty much split the grandparenting duties down the middle. It’s a coin toss as to who changes the diaper. That would be unheard of in my parents’ generation. Grandpa seldom pulled a diaper patrol shift.

Kids learn gender roles by looking at not just their parents but also their grandparents. The fact that it’s not solely the grandmother that provides nurturing, love and sustenance is a move in the right direction.

But for me, the biggest role of being “Papa” is to try to put today’s wired world in context. It’s something we talk about with our children and their partners. Just last weekend my son-in-law referred to how they think about screen time with my 2-year-old grandson: Heads up vs Heads down.  Heads up is when we share screen time with the grandchild, cuddling on the couch while we watch something on a shared screen. We’re there to comfort if something is a little too scary, or laugh with them if something is funny. As the child gets older, we can talk about the themes and concepts that come up. Heads up screen time is sharing time – and it’s one of my favorite things about being a “Papa”.

Heads down screen time is when the child is watching something on a tablet or phone by themselves, with no one sitting next to them. As they get older, this type of screen time becomes the norm and instead of a parent or grandparent hitting the play button to keep them occupied, they start finding their own diversions. When we talk about the potential damage too much screen time can do, I suspect a lot of that comes from “heads down” screen time. Grandparents can play a big role in promoting a healthier approach to the many screens in our lives.

As mentioned, grandparents are a child’s most accessible link to their own history. And it’s not just grandparents. Increasingly, great-grandparents are also a part of childhood. This was certainly not the case when I was young. I was at least a few decades removed from knowing any of my great-grandparents.

This increasingly common connection gives yet another generational perspective. And it’s a perspective that is important. Sometimes, trying to bridge the gap across four generations is just too much for a young mind to comprehend. Grandparents can act as intergenerational interpreters – a bridge between the world of our parents and that of our grandchildren.

In my case, my mother- and father-in-law were immigrants from Calabria in Southern Italy. Their childhood reality was set in World War Two. Their history spans experiences that would be hard for a child today to comprehend – the constant worry of food scarcity, having to leave their own grandparents (and often parents) behind to emigrate, struggling to cope in a foreign land far away from their family and friends. I believe that the memories of these experiences cannot be forgotten. It is important to pass them on, because history is important. One of my favorite recent movie quotes was in “The Holdovers” and came from Paul Giamatti (who also had grandparents who came from Southern Italy):

“Before you dismiss something as boring or irrelevant, remember, if you truly want to understand the present or yourself, you must begin in the past. You see, history is not simply the study of the past. It is an explanation of the present.”

Grandparents can be the ones that connect the dots between past, present and future. It’s a big job – an important job. Thank heavens there are a lot of us to do it.

Why Time Seems to Fly Faster Every Year

Last week, I got an email congratulating me on being on LinkedIn for 20 years.

My first thought was that it couldn’t be twenty years. But when I did the mental math, I realized it was right. I first signed up in 2004. LinkedIn had just started two years before, in 2002.

LinkedIn would have been my first try at a social platform. I couldn’t see the point of MySpace, which started in 2003. And I was still a couple of years away from even being aware Facebook existed. It started in 2004, but it was still known as TheFacebook. It wouldn’t become open to the public until 2006, two years later, after it dropped the “The”. So, 20 years pretty much marks the full span of my involvement with social media.

Twenty years is a significant chunk of time. Depending on your genetics, it’s probably between a quarter and a fifth of your life. A lot can happen in 20 years. But we don’t process time the same way as we get older. Twenty years when you’re 18 seems like a much bigger chunk of time than it does when you’re in your 60s.
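Here’s a rough back-of-the-envelope illustration of that shrinking feeling (the “fraction of life already lived” framing is my own simplification, not a formal model):

```python
# Rough illustration: the same 20-year span, measured against the life
# you have already lived at different ages.
span = 20  # years

for age in (18, 40, 63):
    print(f"At {age}, a {span}-year span is {span / age:.0%} of your life so far")

# At 18, twenty years is longer than everything you can remember;
# by your 60s, it is less than a third of the road behind you.
```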

I always mark these things in my far-off distant youth by my grad year, which was in 1979. If I use that as the starting point, rolling back 20 years would take me all the way to 1959, a year that seemed pre-historic to me when I was a teenager. That was a time of sock hops, funny cars with tail fins, and Frankie Avalon. These things all belonged to a different world than the one I knew in 1979. Ancient Rome couldn’t have been further removed from my reality.

Yet, that same span of time lies between me and the first time I set up my profile on LinkedIn. And that just seems like yesterday to me. This all got me wondering – do we process time differently as we age? The answer, it turns out, is yes. Time is time – but the perception of time is all in our heads.

The reason why we feel time “flies” as we get older was explained in a paper published by Professor Adrian Bejan. In it, he states, “The ‘mind time’ is a sequence of images, i.e. reflections of nature that are fed by stimuli from sensory organs. The rate at which changes in mental images are perceived decreases with age, because of several physical features that change with age: saccades frequency, body size, pathways degradation, etc.”

So, it’s not that time is moving faster, it’s just that our brain is processing it slower. If our perception of time is made up of mental snapshots of what is happening around us, we simply become slower at taking the snapshots as we get older. We notice less of what’s happening around us. I suspect it’s a combination of slower brains and perhaps not wanting to embrace a changing world quite as readily as we did when we were young. Maybe we don’t notice change because we don’t want things to change.

If we were using a more objective yardstick (speaking of which, when is the last time you actually used a yardstick?), I’m guessing the world changed at least as much between 2004 and 2024 as it did between 1959 and 1979. If I were 18 years old today, I’m guessing that Britney Spears, The Lord of the Rings and the last episode of Frasier would seem as ancient to me as a young Elvis, Ben-Hur and The Danny Thomas Show seemed to me then.

To me, all these things seem like they were just yesterday. Which is probably why it comes as a bit of a shock to see a picture of Britney Spears today. She doesn’t look like the 22-year-old we remember – a version of her we mistakenly place just a few years in the past. But Britney is 42 now, and as a 42-year-old, she’s held up pretty well.

And, now that I think of it, so has LinkedIn. I still have my profile, and I still use it.

Memories Made by Media

If you said the year 1967 to me, the memory that would pop into my head would be of Haight-Ashbury (ground zero for the counterculture movement), hippies and the Summer of Love. In fact, that same memory would effectively stand in for the period 1967 to 1969. In my mind, those three years were variations on the theme of Woodstock, the iconic music festival of 1969.

But none of those are my memories. I was alive, but my own memories of that time are indistinct and fuzzy. I was only 6 that year and lived in Alberta, some 1,300 miles from the intersection of Haight and Ashbury Streets, so I have discarded my own indistinct recollections as representative memories of that time. The ones I have kept were all created by images that came to me via media.

The Swapping of Memories

This is an example of the two types of memories we have – personal or “lived” memories and collective memories. Collective memories are the memories we get from outside, either from other people or, in my example, from media. As we age, there tends to be a flow back and forth between these two types of memories, with one type coloring the other.

One group of academics proposed an hourglass model as a working metaphor to understand this continuous exchange of memories – with some flowing one way and others flowing the other.  Often, we’re not even aware of which type of memory we’re recalling, personal or collective. Our memories are notoriously bad at reflecting reality.

What is true, however, is that our personal memories and our collective memories tend to get all mixed up. The lower our confidence in our personal memories, the more we tend to rely on collective memories. For periods before we were born, we rely solely on images we borrow.

Iconic Memories

What is true for all memories, ours or the ones we borrow from others, is that we put them through a process called “leveling and sharpening.” This is a type of memory consolidation where we throw out some of the detail that is not important to us – this is leveling – and exaggerate other details to make them more interesting – i.e. sharpening.

Take my borrowed memories of 1967, for example. There was a lot more happening in the world than whatever was happening in San Francisco during the Summer of Love, but I haven’t retained any of it in my representative memory of that year. For example, there was a military coup in Greece, the first successful human heart transplant, the creation of the Corporation for Public Broadcasting, a series of deadly tornadoes in Chicago, and Typhoon Emma, which left 140,000 people homeless in the Philippines. But none of that made it into my memory of 1967.

We could call the memories we do keep “iconic” – which simply means we chose symbols to represent a much bigger and more complex reality – like everything that happened in a 365-day stretch five and a half decades ago.

Mass Manufactured Memories

Something else happens when we swap our own personal memories for collective memories – we find much more commonality in our memories. The more removed we become from our own lived experiences, the more our memories become common property.

If I asked you to say the first thing that comes to mind about 2002, you would probably look back through your own personal memory store to see if there was anything there. Chances are it would be a significant event from your own life, and this would make it unique to you. If we had a group of 50 people in a room and I asked that question, I would probably end up with 50 different answers.

But if I asked that same group what the first thing is that comes to mind when I say the year 1967, we would find much more common ground. And that ground would probably be defined by how we each identify ourselves. Some of you might have the same iconic memory that I do – Haight-Ashbury and the Summer of Love. Others may have picked the Vietnam War as the iconic memory from that year. But I would venture to guess that in our group of 50, we would end up with only a handful of answers.

When Memories are Made of Media

I am taking this walk down Memory Lane because I want to highlight how much we rely on the media to supply our collective memories. This dependency is critical, because once media images are processed by us and become part of our collective memories, they hold tremendous sway over our beliefs. These memories become the foundation for how we make sense of the world.

This is true for all media, including social media. A study in 2018 (Birkner & Donk) found that “alternative realities” can be formed through social media to run counter to collective memories formed from mainstream media. Often, these collective memories formed through social media are polarized by nature and are adopted by outlier fringes to justify extreme beliefs and viewpoints. This shows that collective memories are not frozen in time but are malleable – continually being rewritten by different media platforms.

Like most things mediated by technology, collective memories are splintering into smaller and smaller groupings, just like the media that are instrumental in their formation.

Media: The Midpoint of the Stories that Connect Us

I’m in the mood for navel gazing: looking inward.

Take the concept of “media,” for instance. Based on the masthead above this post, it’s what this site — and this editorial section — is all about. I’m supposed to be on the “inside” when it comes to media.

But media is also “inside” — quite literally. The word means “middle layer,” so it’s something in between.

There is a nuance here that’s important. Based on the very definition of the word, it’s something equidistant from both ends. And that introduces a concept we in media must think about: We have to meet our audience halfway. We cannot take a unilateral view of our function.

When we talk about media, we have to understand what gets passed through this “middle layer.” Is it information? Well, then we have to decide what information is. Again, the etymology of the word “inform” shows us that informing someone is to “give form to their mind.” But that mind isn’t a blank slate or a lump of clay to be molded as we want. There is already “form” there. And if, through media, we are meeting them halfway, we have to know something about what that form may be.

We come back to this: Media is the midpoint between what we, the tellers, believe and what we want our audience to believe. We are looking for the shortest distance between those two points. And, as self-help author Patti Digh wrote, “The shortest distance between two people is a story.”

We understand the world through stories — so media has become the platform for the telling of stories. Stories assume a common bond between the teller and the listener. This puts media squarely in the middle ground that defines its purpose, the point halfway between us. When we are on the receiving end of a story, our medium of choice is the one closest to us, in terms of our beliefs and our world narrative. These media are built on common ideological ground.

And, if we look at a recent study that helps us understand how the brain builds models of the things around us, we begin to understand the complexity that lies within a story.

This study from the Max Planck Institute for Human Cognitive and Brain Sciences shows that our brains are constantly categorizing the world around us. And if we’re asked to recognize something, our brain has a hierarchy of concepts that it will activate, depending on the situation. The higher you go in the hierarchy, the more parts of your brain are activated.

For example, if I asked you to imagine a phone ringing, the same auditory centers in your brain that activate when you actually hear the phone would kick into gear and give you a quick and dirty cognitive representation of the sound. But if I asked you to describe what your phone does for you in your life, many more parts of your brain would activate, and you would step up the hierarchy into increasingly abstract concepts that define your phone’s place in your own world. That is where we find the “story” of our phone.

As psychologist Robert Epstein says in this essay, we do not process a story like a computer. It is not data that we crunch and analyze. Rather, it’s another type of pattern match, between new information and what we already believe to be true.

As I’ve said many times, we have to understand why there is such a wide gap in how we all interpret the world. And the reason can be found in how we process what we take in through our senses.

The immediate sensory interpretation is essentially a quick and dirty pattern match. There would be no evolutionary purpose to store more information than is necessary to quickly categorize something. And the fidelity of that match is just accurate enough to do the job — nothing more.

For example, if I asked you to draw a can of Coca-Cola from memory, how accurate do you think it would be? The answer, proven over and over again, is that it probably wouldn’t look much like the “real thing.”

That’s coming from one sense, but the rest of your senses are just as faulty. You think you know how Coke smells and tastes and feels as you drink it, but these are low-fidelity tags that act in a split second to help us recognize the world around us. They don’t have to be exact representations because that would take too much processing power.

But what’s really important to us is our “story” of Coke. That was clearly shown in one of my favorite neuromarketing studies, done at Baylor College of Medicine by Read Montague.

He and his team reenacted the famous Pepsi Challenge — a blind taste test pitting Coke against Pepsi. But this time, they scanned the participants’ brains while they were drinking. The researchers found that when Coke drinkers didn’t know what they were drinking, only certain areas of their brains activated, and it didn’t really matter if they were drinking Coke or Pepsi.

But when they knew they were drinking Coke, suddenly many more parts of the brain started lighting up, including the prefrontal cortex, the part of the brain that is usually involved in creating our own personal narratives to help us understand our place in the world.

And while the actual can of Coke doesn’t change from person to person, our Story of Coke can be as individual to us as our own fingerprints.

We in the media are in the business of telling stories. This post is a story. Everything we do is a story. Sometimes they successfully connect with others, and sometimes they don’t. But in order to make effective use of the media we choose as a platform, we must remember we can only take a story halfway. On the other end there is our audience, each of whom has their own narratives that define them. Media is the middle ground where those two things connect.

Our Disappearing Attention Spans

Last week, MediaPost Editor in Chief Joe Mandese mused about our declining attention spans. He wrote,

“while in the past, the most common addictive analogy might have been opiates — as in an insatiable desire to want more — these days [consumers] seem more like speed freaks looking for the next fix.”

Mandese cited a couple of recent studies, saying that more than half of mobile users tend to abandon any website that takes longer than three seconds to load. That

“has huge implications for the entire media ecosystem — even TV and video — because consumers increasingly are accessing all forms of content and commerce via their mobile devices.”

The question that begs to be asked here is, “Is a short attention span a bad thing?” The famous comparison is that we are now more easily distracted than a goldfish. But does a shorter attention span negatively impact us, or is it just our brain changing to be a better fit with our environment?

Academics have been debating the impact of technology on our ability to cognitively process things for some time. Journalist Nicholas Carr sounded the warning in his 2010 book, “The Shallows,” where he wrote, 

“(Our brains are) very malleable, they adapt at the cellular level to whatever we happen to be doing. And so the more time we spend surfing, and skimming, and scanning … the more adept we become at that mode of thinking.”

Certainly, Carr is right about the plasticity of our brains. It’s one of the most advantageous features about them. But is our digital environment forever pushing our brains to the shallow end of the pool? Well, it depends. Context is important. One of the biggest factors in determining how we process the information we’re seeing is the device where we’re seeing it.

Back in 2010, Microsoft did a large-scale ethnographic study on how people searched for information on different devices. The researchers found those behaviors differed greatly depending on the platform being used and the intent of the searcher. They found three main categories of search behaviors:

  • Missions are looking for one specific answer (for example, an address or phone number) and often happen on a mobile device.
  • Excavations are widespread searches that need to combine different types of information (for example, researching an upcoming trip or major purchase). They are usually launched on a desktop.
  • Finally, there are Explorations: searching for novelty, often to pass the time. These can happen on all types of devices and can often progress through different devices as the exploration evolves. The initial search may be launched on a mobile device, but as the user gets deeper into the exploration, she may switch to a desktop.

The important thing about this research was that it showed our information-seeking behaviors are very tied to intent, which in turn determines the device used. So, at a surface level, we shouldn’t be too quick to extrapolate behaviors seen on mobile devices with certain intents to other platforms or other intents. We’re very good at matching a search strategy to the strengths and weaknesses of the device we’re using.
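To make that taxonomy a little more concrete, here’s a minimal sketch of how the three behavior types might be modeled (the field names and device mappings are my paraphrase of the study, not Microsoft’s own categorization):

```python
# A minimal, hypothetical model of the three search-behavior categories
# described above: Missions, Excavations and Explorations.
from dataclasses import dataclass

@dataclass
class SearchBehavior:
    name: str
    goal: str
    typical_devices: tuple

BEHAVIORS = [
    SearchBehavior("Mission", "one specific answer (an address, a phone number)", ("mobile",)),
    SearchBehavior("Excavation", "combining many kinds of information (a trip, a major purchase)", ("desktop",)),
    SearchBehavior("Exploration", "novelty, passing the time", ("mobile", "tablet", "desktop")),
]

for b in BEHAVIORS:
    print(f"{b.name}: {b.goal} -- usually on {', '.join(b.typical_devices)}")
```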

But at a deeper level, if Carr is right (and I believe he is) about our constant split-second scanning of information to find items of interest making permanent changes in our brains, what are the implications of this?

For such a fundamentally important question, there is a small but rapidly growing body of academic research that has tried to answer it. To add to the murkiness, many of the studies done contradict each other. The best summary I could find of academia’s quest to determine if “the Internet is making us stupid” was a 2015 article in the academic journal The Neuroscientist.

The authors sum up by essentially saying both “yes” — and “no.” We are getting better at quickly filtering through reams of information. We are spending fewer cognitive resources memorizing things we know we can easily find online, which theoretically leaves those resources free for other purposes. Finally, for this post, I will steer away from commenting on multitasking, because the academic jury is still very much out on that one.

But the authors also say that 

“we are shifting towards a shallow mode of learning characterized by quick scanning, reduced contemplation and memory consolidation.”

The fact is, we are spending more and more of our time scanning and clicking. There are inherent benefits to us in learning how to do that faster and more efficiently. The human brain is built to adapt and become better at the things we do all the time. But there is a price to be paid. The brain will also become less capable of doing the things we don’t do as much anymore. As the authors said, this includes actually taking the time to think.

So, in answer to the question “Is the Internet making us stupid?,” I would say no. We are just becoming smart in a different way.

But I would also say the Internet is making us less thoughtful. And that brings up a rather worrying prospect.

As I’ve said many times before, the brain thinks both fast and slow. The fast loop is brutally efficient. It is built to get stuff done in a split second, without having to think about it. Because of this, the fast loop has to be driven by what we already know or think we know. Our “fast” behaviors are necessarily bounded by the beliefs we already hold. It’s this fast loop that’s in control when we’re scanning and clicking our way through our digital environments.

But it’s the slow loop that allows us to extend our thoughts beyond our beliefs. This is where we’ll find our “open minds,” if we have such a thing. Here, we can challenge our beliefs and, if presented with enough evidence to the contrary, willingly break them down and rebuild them to update our understanding of the world. In the sense-making loop, this is called reframing.

The more time we spend “thinking fast” at the expense of “thinking slow,” the more we will become prisoners to our existing beliefs. We will be less able to consolidate and consider information that lies beyond those boundaries. We will spend more time “parsing” and less time “pondering.” As we do so, our brains will shift and change accordingly.

Ironically, our minds will change in such a way to make it exceedingly difficult to change our minds.

The Joy of Listening to Older People

The older I get, the more I enjoy talking to people who have accumulated decades of life experience. I consider it the original social media: the sharing of personal oral histories.

People my age often become interested in their family histories. When you talk to these people, they always say the same thing: “I wish I had taken more time to talk to my grandparents when they were still alive.” No one has ever wished they had spent less time with Grandma and Grandpa.

In the hubris of youth, there seems to be the common opinion that there couldn’t be anything of interest in the past that stretches further than the day before yesterday.  When we’re young, we seldom look back. We live in the moment and are obsessed with the future.

This is probably as it should be. Most of our lives lie in front of us. But as we pass the middle mark of our own journey, we start to become more reflective. And as we do so, we realize that we’ve missed the opportunity to hear most of our own personal family histories from the people who lived it. Let’s call it ROMO: The Regret of Missing Out.

Let me give you one example. In our family, with Remembrance Day (the Canadian version of Veterans Day) fast approaching, one of my cousins asked if we knew of any family that served in World War I. I vaguely remembered that my great grandfather may have served, so I did some digging and eventually found all his service records.

I discovered that he enlisted to go overseas when he was almost 45 years old, leaving behind a wife and five children. He served as a private in the trenches at the Battle of the Somme and at Vimy Ridge. He was gassed. He had at least four bouts of trench fever, which is transmitted by body lice.

As a result, he developed a debilitating soreness in his limbs and back that made it impossible for him to continue active duty. Two and a half years after he enlisted, this almost 50-year-old man was able to sail home to his wife and family.

I was able to piece this together from the various records and medical reports. But I would have given anything to be able to hear these stories from him.

Unfortunately, I never knew him. My mom was just a few years old when he died, a somewhat premature death that was probably precipitated by his wartime experience.

This was a story that fell through the cracks between the generations. And now it’s too late. It will remain mostly hidden, revealed only by the sparse information we can glean from a handful of digitized records.

It’s not easy to get most older people talking. They’re not used to people caring about their past or their stories. You have to start gently and tease it out of them.

But if you persist and show an eagerness to listen, eventually the barriers come down and the past comes tumbling out, narrated by the person who lived it. Trust me when I say there is nothing more worthwhile that you can do.

We tend to ignore old people because we just have too much going on in our own lives. But it kills me just a little bit inside when I see grandparents and grandchildren in the same room, the young staring at a screen and the old staring off into space because no one is talking to them.

The screen will always be there. But Grandma isn’t getting any younger. She has lived her life. And I guarantee that in the breadth and depth of that life, there are some amazing stories you should take some time to listen to.

The Difference Between a Right-Wing and Left-Wing Media Brain

I’ve been hesitating to write this column. But increasingly, everything I write and think about seems to come back to the same point – the ideological divide between liberals and conservatives. That divide is tearing the world apart. And technology seems to be accelerating the forces causing the rift, rather than reversing them.

First, a warning: I am a Liberal. That probably doesn’t come as a surprise to anyone who has read any of my columns, but I did want to put it out there. And the reason I feel that warning is required is that with this column, I’m diving into dangerous waters – I’m going to be talking about the differences between liberal and conservative brains, particularly those brains that are working in the media space.

Last week, I talked about the evolution of media bias through two – and, what seems increasingly likely, three – impeachment proceedings. Mainstream media has historically had a left bias. In a longitudinal study of journalism, two professors at Indiana University – Lars Willnat and David Weaver – found that in 2012, just 7% of American journalists identified themselves as Republican, while 28% said they were Democrats. Over 50% said they were Independent, but I suspect this is more a statement on the professed objectivity of journalists than their actual political leanings. I would be willing to bet that those independents sway left far more often than they sway right.

So, it’s entirely fair to say that mainstream media does have a liberal bias. The question is – why? Is it a premeditated conspiracy or just a coincidental correlation? I believe the bias is actually self-selected. Those who choose to go into journalism have brains that work in a particular way – a way that is most often found in those who fall on the liberal end of the spectrum.

I first started putting this hypothesis together when I read the following passage in Robert Sapolsky’s book “Behave: The Biology of Humans at Our Best and Worst.” Sapolsky was talking about a growing number of studies looking at the cognitive differences between liberals and conservatives: “This literature has two broad themes. One is that rightists are relatively uncomfortable intellectually with ambiguity…The other is that leftists, well, think harder, have a greater capacity for what the political scientist Philip Tetlock of the University of Pennsylvania calls ‘integrative complexity’.”

Sapolsky goes on to differentiate these intellectual approaches: “conservatives start gut and stay gut; liberals go from gut to head.”

Going from “gut to head” is a pretty good quality for a journalist. In fact, you could say it’s their job description.

Sapolsky cites a number of studies he bases this conclusion on. In the abstract of one of these studies, the researchers note: “Liberals are more likely to process information systematically, recognize differences in argument quality, and to be persuaded explicitly by scientific evidence, whereas conservatives are more likely to process information heuristically, attend to message-irrelevant cues such as source similarity, and to be persuaded implicitly through evaluative conditioning. Conservatives are also more likely than liberals to rely on stereotypical cues and assume consensus with like-minded others.”

This is about as good a description of the differences between mainstream media and the alt-right media as I’ve seen. The researchers further note that, “Liberals score higher than conservatives on need for cognition and open-mindedness, whereas conservatives score higher than liberals on intuitive thinking and self-deception.”

That explains so much of the current situation we’re finding ourselves in. Liberals tend to be investigative journalists. Conservatives tend to be opinion columnists and pundits. One is using their head. The other is using their gut.

Of course, it’s not just the conservative media that rely on gut instinct. The Commander in Chief uses the same approach. In a 2016 article in The Washington Post, Marc Fisher probed Trump’s disdain for reading: “He said in a series of interviews that he does not need to read extensively because he reaches the right decisions ‘with very little knowledge other than the knowledge I [already] had, plus the words “common sense,” because I have a lot of common sense and I have a lot of business ability.’”

I have nothing against intuition. The same Post article goes on to give examples of other presidents who relied on gut instinct (Fisher notes, however, that even when these are factored in, Trump is still an outlier). But when the stakes are as high as they are now, I prefer intuition combined with some research and objective evaluation.

We believe in the concept of equality and fairness, as we should. For that reason, I hesitate to put yet another wall between conservatives and liberals. But – in seeking answers to complex questions – I think we have to be open and honest about the things that make us different. There is a reason some of us are liberals and some of us are conservatives – our brains work differently*. And when those differences extend to our processing of our respective realities and the sources we turn to for information, we should be aware of them. We should take them into account in evaluating our media choices. We should go forward with open minds.

Unfortunately, I suspect I’m preaching to the choir. If you got this far in my column, you’re probably a liberal too.

* If you really want to dig further, check out the paper “Are Conservatives from Mars and Liberals from Venus? Maybe Not So Much” by Linda Skitka, one of the foremost researchers exploring this question.

Photos: Past, Present and Future

I was at a family reunion this past week. While there, my family did what families do at reunions: We looked at family photos.

In our case, our photographic history started some 110 years or so ago, with my great-great-grandfather George and his wife Kezia. We have a stunning picture of the couple, with Kezia wearing an ostrich feather hat.

George and Kezia Ching – Redondo Beach

At the time of the photo, George was an ostrich feather dyer in Hollywood, California. Apparently, there was a need for dyed ostrich feathers in turn-of-the-century Hollywood. That need didn’t last for long. The bottom fell out of the ostrich feather market and George and Kezia turned their sights north of the 49th, high-tailing it for Canada.

We’re a lucky family. We have four generations of photographic evidence of my mother’s forebears. They were solidly middle class and could afford the luxury of having a photo taken, even around the turn of the century. There were plenty of preserved family images that fueled many conversations and sparked memories as we gathered the clan.

What was interesting to me is that some 110 years after this memorable portrait was taken, we also took many new photos so we could remember this reunion in the future.  With all the technological change that has happened since George and Kezia posed in all their ostrich-feather-accessorized finery, the basic format of a two-dimensional visual representation was still our chosen medium for capturing the moment.

We talk about media a lot here at MediaPost — enough that it’s included in the headline of the post you’re reading. I think it’s worth a quick nod of appreciation to media that have endured for more than a century. Books and photos both fall into this category. Great-Great Grandfather George might be a bit flustered if he was looking at a book on a Kindle or viewing the photo on an iPhone, but the format of the medium itself would not be that foreign to him. He would be able to figure it out.

What dictates longevity in media? I think we have an inherent love for media that are a good match for both our senses and our capacity to imagine. Books give us the cognitive room to imagine worlds that no CGI effect has yet been able to match. And a photograph is still the most convenient way to render permanent the fleeting images that chase across our visual cortex. This is all the more true when those images are of the faces we love. Like books, photos also give our minds the room to fill in the blanks, remembering the stories that go with the static image.

Compare a photo to something like a video. We could easily have taken videos to capture the moment. All of us had a pretty good video camera in our pocket. But we didn’t. Why not?

Again, we have to look at intended purpose at the moment of future consumption. Videos are linear. They force their own narrative arc upon us. We have to allocate the time required to watch the video to its conclusion. But a photo is randomly accessed. Our senses consume it at their own pace and prerogative, free of the restraints of the medium itself. For things like communal memories at a family reunion, a photo is the right match. There are circumstances where a video would be a better fit. This wasn’t one of them.

Our Family – 2019

There is one thing about photos that will be different moving forward. They are now in the digital domain, which means they can be stored with no restraints on space. It also means that we can take advantage of appended metadata. For the sake of my descendants, I hope this makes the bond between the photo and the stories a little more durable than what we currently deal with. If we were lucky, we had a quick notation on the back of an old photo to clarify the whos, whens and wheres.
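As a small example of what that appended metadata looks like in practice, here’s a hedged Python sketch that pulls a photo’s embedded date and camera fields with the Pillow library (the file path is hypothetical, and not every photo will carry every tag):

```python
# Minimal sketch: reading the metadata a digital photo carries with it.
# Assumes the Pillow library is installed (pip install Pillow); the path below is hypothetical.
from PIL import Image
from PIL.ExifTags import TAGS

def read_photo_metadata(path):
    """Return the human-readable EXIF tags stored inside an image file."""
    with Image.open(path) as img:
        exif = img.getexif()
        return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

metadata = read_photo_metadata("reunion_2019/family_photo.jpg")  # hypothetical path
for field in ("DateTime", "Model", "Artist", "ImageDescription"):
    print(field, "->", metadata.get(field, "(not recorded)"))
```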

A few of my more archivally inclined cousins started talking about the future generations of our family. When they remember us, what media would they be using? Would they be looking at the many selfies and digital shots that were taken in 2019, trying to remember who that person between Cousin Dave and Aunt Lorna was? What would be the platform used to store the photos? What will be the equivalent of the family album in 2119? How will they be archiving their own memories?

I suspect that if I were there, I wouldn’t be that surprised at the medium of choice.