'Twas the Night Before the Internet

Today, just one day before Christmas, my mind swings to the serendipitous side. I don’t know about you, but for me, 2019 has been a trying year. While you would never know it by the collection of columns I’ve produced over the past 12 months, I have tried to find the glimpses of light in the glowering darkness.

Serendipity Sidetrack #1: “Glowering” is a word we don’t use much anymore. It refers to someone with a dark, angry expression on their face. As such, it’s pretty timely and relevant. You’d think we would use it more.

One of my personal traditions during the holidays is to catch one of the fourteen billion airings of “It’s a Wonderful Life.” Yes, it’s quintessentially Capraesque. Yes, it’s corny as hell. But give me a big seasonal heaping helping of Jimmy Stewart, Donna Reed and that “crummy little town” known as Bedford Falls.

Serendipity Sidetrack #2: The movie “It’s a Wonderful Life” is based on a 1939 short story by Philip Van Doren Stern. He tried to get it published for several years with no success. He finally self-published it and sent it to 200 friends as a 24-page Christmas card. One of these cards ended up on the desk of an executive at RKO Pictures, who convinced the studio to buy the rights in 1943 as a vehicle for its star Cary Grant.

That movie never got made and the project was shelved for the rest of World War II. After the war, director Frank Capra read the script and chose it as his first Hollywood movie after making war documentaries and training films.

The movie was panned by critics and ignored by audiences. It was a financial disaster, eventually leading to the collapse of Capra’s new production company, Liberty Films. One other stray tidbit: during the scene at the high school dance where the gym floor opens over the pool (which was shot at Beverly Hills High School), Mary’s obnoxious date Freddie is played by an adult Carl “Alfalfa” Switzer, from the “Our Gang” series.

But I digress. This seasonal ritual got me thinking along “what if” lines. We learn what Bedford Falls would have been like if George Bailey had never been born. But maybe the same narrative machinery could be applied to another example: What would Christmas (or your seasonal celebration of choice) be like if the Internet had never happened?

As I pondered this, I realized that there’s really only one aspect of the Internet that materially impacts what the holidays have become. These celebrations revolve around families, so if we’re going to look for changes wrought by technology, we have to look at the structure and dynamics of the family unit.

Serendipity Sidetrack #3: Christmas was originally not a family-based celebration. It became so in Victorian England thanks to Queen Victoria, Prince Albert and Charles Dickens. After the marriage of the royal couple, Albert brought the German tradition of the Christmas tree to Windsor Castle. Pictures of the royals celebrating with family around the tree firmly shifted the holiday towards its present warm-hearted family center.

In 1843, Dickens added social consciousness to the party with the publication of “A Christmas Carol.” The holiday didn’t take its detour towards overt consumerism until the prosperity of the 1950s.

But back to my rapidly unraveling narrative thread: What would Christmas be like without the Internet?

I have celebrated Christmas in two different contexts: the first in my childhood, and the second with my own wife and family.

I grew up with just my immediate family in rural Alberta, geographically distant from aunts, uncles and cousins. For dinner there would be six of us around the table. We might try to call an aunt or uncle who lived some 2,000 miles away, but usually the phone lines were so busy we couldn’t get through.

The day was spent with each other and usually involved a few card games, a brief but brisk walk and getting ready for Christmas dinner. It was low-key, but I still have many fond memories of my childhood Christmases.

Then I got married. My wife, who is Italian, has dozens and dozens and dozens of relatives within a stone’s throw in any direction. For us, Christmas is now a progressive exercise to see just how many people can be crammed into the same home. It begins at our house for Christmas morning with the “immediate” family (remember, I use the term in its Italian context). The head count varies between 18 and 22 people.

Then, we move to Christmas dinner with the “extended” family. The challenge here is finding a house big enough, because we are now talking 50 to 75 people. It’s loud, it’s chaotic — and I couldn’t imagine Christmas any other way.

The point here is how the Internet has shifted the nature of the celebration. In my lifespan, I have seen two big shifts, both to do with the nature of our personal connections. And like most things with technology, one has been wonderful while the other has been troubling.

First of all, thanks to the Internet, we can extend our family celebrations beyond the limits of geography. I can now connect with family members who don’t live in the same town.

But, ironically, the same technology has been eroding the bonds we have with the family we are physically present with. We may be in the same room, but our minds are elsewhere, preoccupied with the ever-present screens in our pockets or purses. In my pre-Internet memories of Christmas, we were fully there with our families. Now, this is rarely the case.

And one last thought. I find — sadly — that Christmas is just one more occasion to be shared through social media. For some of us, it’s not so much about who we’re with or what we’re doing as about how it will look in our Instagram post.

The Ruts of Our Brain

We are not – by nature – open-minded. In fact, as we learn something, the learning creates neural pathways in our brain that we tend to stick to. In other words, the more we learn, the bigger the ruts get.

Our brains are this way by design. At its core, the brain is an energy-saving device. If there are two options open to it, one requiring more cognitive processing and one requiring less, the brain will default to the less resource-intensive option.

This puts expertise into an interesting new perspective. In a recent study, researchers from Cold Spring Harbor Laboratory, Columbia University, University College London and the Flatiron Institute found that when mice learn a new task, the neurons in their brains actually change as they move from being a novice to an expert. At the beginning, as they’re learning the task, the required neurons don’t “fire” until the brain makes a decision. But, as expertise is gained, those same neurons start responding before they’re even needed. It’s essentially Hebbian theory (named after psychologist Donald Hebb) in action: the neurons that fire together eventually wire together.
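
To make that rule concrete, here’s a minimal sketch in Python. It isn’t the researchers’ model, just the textbook Hebbian update (weight change = learning rate × pre-synaptic activity × post-synaptic activity), with numbers I made up to show a practised pathway strengthening:

```python
# Textbook Hebbian update: delta_w = lr * pre * post.
# Illustrative only; not the Cold Spring Harbor study's model.
def hebbian_step(weights, pre, post, lr=0.1):
    return [w + lr * x * post for w, x in zip(weights, pre)]

weights = [0.0, 0.0, 0.0]      # connections from three input neurons
practised = [1.0, 1.0, 0.0]    # the input pattern the task repeats

for _ in range(20):            # every repetition, the output neuron fires (post = 1.0)
    weights = hebbian_step(weights, practised, post=1.0)

print(weights)  # ~[2.0, 2.0, 0.0]: the practised connections strengthen,
                # the unused one stays flat. The rut deepens.
```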

We tend to think of experts as bringing a well-honed subset of intellectual knowledge to a question. And that is true, as long as the question is well within their area of expertise. But the minute an expert ventures outside of their “rut,” they begin to flounder. In fact, even when they are in their area of expertise but are asked to predict where that path may lead in the future – beyond their current rut – their expertise doesn’t help them. In 2005, psychologist Philip Tetlock published “Expert Political Judgment” – a book showing the results of a 20-year study on the prediction track record of experts. It wasn’t good. According to a New Yorker review of the book, “Human beings who spend their lives studying the state of the world…are poorer forecasters than dart-throwing monkeys.”

Why? Well, just like those mice in the above-mentioned study, once we have a rut, our brains like to stick to the rut. It’s just easier for us. And experts have very deep ruts. The deeper the rut, the more effort it takes to peer above it. As Tetlock found, when it comes to predicting what might happen in some area in the future, even if you happen to be an expert in that area, you’d probably be better off flipping a coin than relying on your brain.

By the way, for most of human history, this has been a feature, not a bug. Saving cognitive energy is a wonderful evolutionary advantage. If you keep doing the same thing over and over, eventually the brain pre-lights the neuronal path required, saving itself time and energy. The brain is directing anticipated traffic at faster than the speed of thought. And it’s doing it so well, it would take a significant amount of cognitive horsepower to derail this action.

Like I said, in a fairly predictable world of cause and effect, this system works. But in an uncertain world full of wild-card complexity, it can be crippling.

Complex worlds require foxes, not hedgehogs. This analogy also comes from Tetlock’s book. According to an old Greek saying, attributed to the poet Archilochus, “The fox knows many things but the hedgehog knows just one thing.” To that I would add: the fox knows a little about many things, but the hedgehog knows a lot about one thing. In other words, the hedgehog is an expert.

In Tetlock’s study, people with “fox” qualities had a significantly better track record than “hedgehogs” when it came to predicting the future. Their brains were better able to take the time to synthesize the various data inputs required to deal with the complexity of crystal-balling the future, because they weren’t barrelling down a pre-ordained path carved by years of accumulated expertise.

But it’s not just expertise that creates these ruts in our brains. The same pattern plays out when we look at the role our beliefs play in how open-minded we are. The stronger the belief, the deeper the rut.

Again, we have to remember that this tendency of our brains to form well-travelled grooves over time has been crafted by the blind watchmaker of evolution. But that doesn’t make it any less troubling when we think about the limitations it imposes in a more complex world. This is especially true when new technologies deliberately leverage our vulnerability in this area. Digital platforms ruthlessly eliminate the real estate that lies between perspectives. The ideological landscape in which foxes can effectively operate is disappearing. Increasingly we grasp for expertise – whether it’s on the right or left of any particular topic – with the goal of preserving our own mental ruts.

And as the ruts get deeper, foxes are becoming an endangered species.

Just in Time for Christmas: More Search Eye-Tracking

The good folks over at the Nielsen Norman Group have released a new search eye-tracking report. The findings are quite similar to those of a study my former company — Mediative — did a number of years ago (this link goes to a write-up about the study; unfortunately, the link to the original study is broken. *Insert head smack here).

In the Nielsen Norman study, the two authors — Kate Moran and Cami Goray — looked at how a more visually rich and complex search results page would impact user interaction with the page. They called the sum of participant interactions a “Pinball Pattern”: “Today, we find that people’s attention is distributed on the page and that they process results more nonlinearly than before. We observed so much bouncing between various elements across the page that we can safely define a new SERP-processing gaze pattern — the pinball pattern.”

While I covered this at some length when the original Mediative report came out in 2014 (in three separate columns: 1, 2 & 3), there are some themes that bear repeating. Unfortunately, the study’s authors missed what I think are some of the more interesting implications.

In the days of the “10 Blue Links” search results page, we used the same scanning strategy no matter what our intent was. In an environment where the format never changes, you can afford to rely on a stable and consistent strategy. 

In our first eye-tracking study, published in 2005, this consistent strategy led to something we called the Golden Triangle. But those days are over.

Today, when every search result can look a little bit different, it comes as no surprise that every search “gaze plot” (the path the eyes take through the results page) will also be different. Let’s take a closer look at the reasons for this. 

SERP Eye Candy

In the Nielsen Norman study, the authors felt “visual weighting” was the main factor in creating the “Pinball Pattern”: “The visual weight of elements on the page drives people’s scanning patterns. Because these elements are distributed all over the page and because some SERPs have more such elements than others, people’s gaze patterns are not linear. The presence and position of visually compelling elements often affect the visibility of the organic results near them.”

While the visual impact of the page elements is certainly a factor, I think it’s only part of the answer. I believe a bigger, and more interesting, factor is how the searcher’s brain and its searching strategies have evolved in lockstep with a more visually complex results page. 

The Importance of Understanding Intent

The reason we see so much variation in scan patterns is that there is also extensive variation in searchers’ intent. The exact same search query could come from someone looking for an online or physical place to buy a product, comparing prices, digging into the technical specs, hunting for how-to videos, or checking consumer reviews.

It’s the same search, but with many different intents. And each of those intents will result in a different scanning pattern. 

Predetermined Page Visualizations

I really don’t believe we start each search page interaction with a blank slate, passively letting our eyes be dragged to the brightest, shiniest object on the page. I think that when we launch the search, our intent has already created an imagined template for the page we expect to see. 

We have all used search enough to be fairly accurate at predicting what the page elements might be: thumbnails of videos or images, a map showing relevant local results, perhaps a Knowledge Graph panel in the right-hand column.

Yes, the visual weighting of elements acts as an anchor to draw the eye, but I believe the eye is using this anticipated template to efficiently parse the results page.

I have previously referred to this behavior as a “chunking” of the results page. And we already have an idea of what the most promising chunks will be when we launch the search. 

It’s this chunking strategy that’s driving the “pinball” behavior in the Nielsen Norman study. In the Mediative study, it was somewhat surprising to see that users were clicking on a result in about half the time it took in our original 2005 study. We cover more search territory, but thanks to chunking, we do it much more efficiently.

One Last Time: Learn Information Scent

Finally, let me drag out a soapbox I haven’t used for a while. If you really want to understand search interactions, take the time to learn about Information Scent and how our brains follow it (Information Foraging Theory — Pirolli and Card, 1999 — the link to the original study is also broken. *Insert second head smack, this one harder).
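
For the mathematically inclined: the theory borrows from optimal foraging models in ecology, where a forager acts to maximize the rate of gain. In simplified form (my condensed notation, not a quote from the paper):

$$R = \frac{G}{T_B + T_W}$$

Here R is the rate of information gain, G is the total information gained, T_B is the time spent moving between information “patches” (results pages, sites), and T_W is the time spent within a patch. Information scent is the searcher’s running estimate of where R will be highest, which is exactly what a gaze plot traces.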

This is one area where the Nielsen Norman Group and I are totally aligned. In 2003, Jakob Nielsen — the first N in NNG — called the theory “the most important concept to emerge from human-computer interaction research since 1993.”

On that we can agree.

Why Quitting Facebook is Easier Said than Done

Not too long ago, I was listening to an interview with a privacy expert about… you guessed it, Facebook. The gist of the interview was that Facebook can’t be trusted with our personal data, as it has proven time and again.

But when asked if she would quit Facebook completely because of this — as tech columnist Walt Mossberg did — the expert said something interesting: “I can’t really afford to give up Facebook completely. For me, being able to quit Facebook is a position of privilege.”

Wow!  There is a lot living in that statement. It means Facebook is fundamental to most of our lives — it’s an essential service. But it also means that we don’t trust it — at all.  Which puts Facebook in the same category as banks, cable companies and every level of government.

Facebook — in many minds, anyway — became an essential service because of Metcalfe’s Law, which states that the effect of a network is proportional to the square of the number of connected users of the system. More users = disproportionately more value. Facebook has Metcalfe’s Law nailed. It has almost two and a half billion users.
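
As a back-of-the-envelope illustration (my own numbers, not Facebook’s math), Metcalfe’s Law falls out of simply counting the possible pairwise connections in a network:

```python
# Metcalfe's Law intuition: n users can form n * (n - 1) / 2 pairwise
# connections, so a network's potential value grows roughly with n^2.
def possible_connections(n):
    return n * (n - 1) // 2

for users in (10, 100, 1_000_000, 2_500_000_000):
    print(f"{users:>13,} users -> {possible_connections(users):,} possible connections")
```

Double the users and you roughly quadruple the possible connections, which is why each new user makes the network that much harder to leave.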

But it’s more than just sheer numbers. It’s the nature of engagement. Thanks to a premeditated addictiveness in Facebook’s design, its users are regular users. Of those 2.5 billion users, 1.6 billion log in daily, and 1.1 billion log in daily from a mobile device. That means about 15% of all the people in the world are constantly — addictively — connected to Facebook.

And that’s why Facebook appears to be essential. If we need to connect to people, Facebook is the most obvious way to do it. If we have a business, we need Facebook to let our potential customers know what we’re doing. If we belong to a group or organization, we need Facebook to stay in touch with other members. If we are social beasts at all, we need Facebook to keep our social network from fraying away.

We don’t trust Facebook — but we do need it.

Or do we? After all, we Homo sapiens have managed to survive for 99.9925% of our collective existence without Facebook. And there is mounting research indicating that going cold turkey on Facebook is great for your mental health. But like all things that are good for you, quitting Facebook can be a real pain in the ass.

Last year, New York Times tech writer Brian Chen decided to ditch Facebook. This is a guy who is fully conversant in tech — and even he found making the break is much easier said than done. Facebook, in its malevolent brilliance, has erected some significant barriers to exit for its users if they do try to make a break for it.

This is especially true if you have fallen into the convenient trap of using Facebook’s social sign-in on sites rather than juggling multiple passwords and user IDs. If you’re up for the challenge, Chen has put together a 6-step guide to making a clean break of it.

But what if you happen to use Facebook for advertising? You’ve essentially sold your soul to Zuckerberg. Reading through Chen’s guide, I’ve decided that it’s just easier to go into the Witness Protection Program. Even there, Facebook will still be tracking me.

By the way, after six months without Facebook, Chen did a follow-up on how his life had changed. The short answer is: not much, but what did change was for the better. His family didn’t collapse. His friends didn’t desert him. He still managed to have a social life. He spent a lot less on spontaneous online purchases. And he read more books.

The biggest outcome was that advertisers “gave up on stalking” him. Without a steady stream of personal data from Facebook, Instagram thought he was a woman.

Whether you’re able to swear off Facebook completely or not, I wonder what the continuing meltdown of trust in Facebook will do for its usage patterns. As in most things digital, young people seem to have intuitively stumbled on the best way to use Facebook. Use it if you must to connect to people when you need to (in their case, grandmothers and great-aunts) — but for heaven’s sake, don’t post anything even faintly personal. Never afford Facebook’s AI the briefest glimpse into your soul. No personal affirmations, no confessionals, no motivational posts and — for the love of all that is democratic — nothing political.

Oh, one more thing. Keep your damned finger off of the like button, unless it’s for your cousin Shermy’s 55th birthday celebration in Zihuatanejo.

Even then, maybe it’s time to pick up the phone and call the ol’ Shermeister. It’s been too long.

The Hidden Agenda Behind Zuckerberg’s “Meaningful Interactions”

It probably started with a good intention. Facebook – aka Mark Zuckerberg – wanted to encourage more “Meaningful Interactions.” And so, early last year, Facebook engineers started making some significant changes to the algorithm that determined what you saw in your News Feed. Here are some excerpts from Zuck’s post to that effect:

“The research shows that when we use social media to connect with people we care about, it can be good for our well-being. We can feel more connected and less lonely, and that correlates with long term measures of happiness and health. On the other hand, passively reading articles or watching videos — even if they’re entertaining or informative — may not be as good.”

That makes sense, right? It sounds logical. Zuckerberg went on to say how they were changing Facebook’s algorithm to encourage more “Meaningful Interactions.”

“The first changes you’ll see will be in News Feed, where you can expect to see more from your friends, family and groups.

As we roll this out, you’ll see less public content like posts from businesses, brands, and media. And the public content you see more will be held to the same standard — it should encourage meaningful interactions between people.”


Let’s fast-forward almost two years, and we now see the outcome of that good intention: an ideological landscape with a huge chasm where the middle ground used to be.

The problem is that Facebook’s algorithm naturally favors content from like-minded people. And it doesn’t take a very high degree of ideological homogeneity to create a highly polarized landscape. This shouldn’t have come as a surprise: American economist Thomas Schelling showed us, almost 50 years ago, how easily segregation can happen.

The Schelling Model of Segregation was created to demonstrate why racial segregation was such a chronic problem in the U.S., even given repeated efforts to desegregate. The model showed that even when we’re pretty open-minded about who our neighbours are, we will still tend to self-segregate over time.

The model works like this: a grid represents a population with two different types of agents, X and O. The square an agent occupies represents where they live. If the agent is satisfied, they stay put. If they aren’t satisfied, they move to a new location. The variable here is the level of satisfaction, determined by what percentage of an agent’s immediate neighbours are the same type of agent as they are. For example, the level of satisfaction might be set at 50%, where the X agent needs at least 50% of its neighbours to also be of type X. (If you want to try the model firsthand, Frank McCown, a computer science professor at Harding University, has created an online version.)

The most surprising thing that comes out of the model is that this threshold of satisfaction doesn’t have to be set very high at all for extensive segregation to happen over time. You start to see significant “clumping” of agent types at percentages as low as 25%. At 40% and higher, you see sharp divides between the X and O communities. Remember, even at 40%, that means that Agent X only wants 40% of their neighbours to also be of the X persuasion. They’re okay being surrounded by up to 60% Os. That is much more open-minded than most human agents I know.
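
If you’d rather read the model than run McCown’s demo, here’s a minimal sketch in Python of the mechanics described above. The grid size, vacancy rate and default threshold are my own illustrative choices, not McCown’s:

```python
import random

def run_schelling(size=20, vacancy=0.1, threshold=0.4, steps=100):
    """Minimal Schelling model: 'X' and 'O' agents move to a random
    empty cell whenever fewer than `threshold` of their occupied
    neighbouring cells hold the same type of agent."""
    agents = ["X", "O"] * int(size * size * (1 - vacancy) / 2)
    cells = agents + [None] * (size * size - len(agents))
    random.shuffle(cells)
    grid = [cells[i * size:(i + 1) * size] for i in range(size)]

    def unhappy(r, c):
        me = grid[r][c]
        same = occupied = 0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if (dr or dc) and 0 <= r + dr < size and 0 <= c + dc < size:
                    n = grid[r + dr][c + dc]
                    occupied += n is not None
                    same += n == me
        return occupied > 0 and same / occupied < threshold

    for _ in range(steps):
        movers = [(r, c) for r in range(size) for c in range(size)
                  if grid[r][c] is not None and unhappy(r, c)]
        empties = [(r, c) for r in range(size) for c in range(size)
                   if grid[r][c] is None]
        if not movers:
            break  # everyone is satisfied; the clusters have locked in
        for r, c in movers:
            er, ec = empties.pop(random.randrange(len(empties)))
            grid[er][ec], grid[r][c] = grid[r][c], None
            empties.append((r, c))
    return grid

# Even at a 40% threshold, distinct X and O neighbourhoods emerge:
for row in run_schelling():
    print("".join(cell or "." for cell in row))
```

Run it a few times and watch the map: agents who would happily live as a minority still end up in homogeneous blocks.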

Now, let’s move the Schelling Model to Facebook. We know from the model that even pretty open-minded people will physically segregate themselves over time. The difference is that on Facebook, they don’t move to a new part of the grid, they just hit the “unfollow” button. And the segregation isn’t physical – it’s ideological.

This natural behavior is then accelerated by Facebook’s “Meaningful Interactions” algorithm, which filters on the basis of the people you have connected with, setting in motion an ever-tightening spiral that eventually restricts your feed to a very narrow ideological horizon. The resulting cluster then becomes a segment used for ad targeting. We can quickly see how Facebook first built these very homogeneous clusters by changing its algorithm, and then profits from them by providing advertisers the tools to micro-target them.

Finally, after doing all this, Facebook absolves itself of any responsibility to ensure subversive and blatantly false messaging isn’t delivered to these ideologically vulnerable clusters. It’s no wonder comedian Sacha Baron Cohen just took Zuck to task, saying “if Facebook were around in the 1930s, it would have allowed Hitler to post 30-second ads on his ‘solution’ to the ‘Jewish problem.’”

In rereading Mark Zuckerberg’s post from two years ago, you can’t help but start reading between the lines. First of all, there is mounting evidence that disproves his contention that meaningful social media encounters help your well-being. It appears that quitting Facebook entirely is much better for you.

And secondly, I suspect that – just like his defence of running false and malicious advertising by citing free speech – Zuck has a not-so-hidden agenda here. I’m sure Zuckerberg and his Facebook engineers weren’t oblivious to the fact that their changes to the algorithm would result in nicely segmented psychographic clusters that would be like catnip to advertisers – especially political advertisers. They were consolidating exactly the same vulnerabilities that were exploited by Cambridge Analytica.

They were building a platform that was perfectly suited to subvert democracy.

Running on Empty: Getting Crushed by the Crush It Culture

“Nobody ever changed the world on 40 hours a week.”

Elon Musk

Those damned Protestants and their work ethic. Thanks to them, unless you’re willing to put in a zillion hours a week, you’re just a speed bump on the road to all that is good in the world. Take Mr. Musk, for example. If you happen to work at Tesla, or SpaceX, or the Boring Company, Elon has figured out what your average work week should be: “(It) Varies per person, but about 80 sustained, peaking above 100 at times. Pain level increases exponentially above 80.”

“Pain level increases exponentially above 80”? WTF, Mr. Musk!

But he’s not alone. Google famously built its Mountain View campus so employees never had to go home. Alibaba Group founder Jack Ma calls the intense work culture at his company a “huge blessing.” He calls it the “996” work schedule: 9 am to 9 pm, 6 days a week. That’s 72 hours, if you’re counting. But even that wouldn’t cut it if you worked for Elon Musk. You’d be a deadbeat.

This is the “Crush It” culture, where long hours equate to dedication and – by extension – success. No pain, no gain.

We spend lots of time talking about the gain — so let me spend just one column talking about the pain. Pain such as mental illness, severe depression, long-term disability and stroke. Those who overwork are more likely to overeat, smoke, drink excessively and develop other self-destructive habits.

You’re not changing the world. You’re shortening your life. The Japanese call it karoshi: death by overwork.

Like so many things, this is another unintended consequence of a digitally mediated culture. Digital speeds everything up. But our bodies – and brains – aren’t digital. They burn out if they move too fast – or too long.

Overwork as a sign of superior personal value is a fairly new concept in the span of human history. It came from the Puritans who settled in New England. They believed that those who worked hard at their professions were the ones chosen to get into heaven. The more wealth you amassed from your work, the more evidence there was that you were one of the chosen.

Lately, the creeping capitalist culture of overwork has most firmly embedded itself in the tech industry. There, the number of hours you work has become a proxy for your own worth. A twisted type of machismo has evolved, trapping us all into thinking that an hour not spent at our jobs is an hour wasted. We are looked down upon for wanting some type of balance in our lives.

Unfortunately for the Musks and Mas and other modern-day taskmasters, the biology just doesn’t support their proposed work schedules.

First, our brains need rest. Back in the 17th century, when those Puritans proved their worth through work, earning a living was usually a physical endeavour. The load of overwork was spread across the fairly simple mechanical machinery of our own bodies. Muscles got sore. Joints ached. But they recovered.

The brain is a much more complex beast. When it gets overworked, it loses its executive ability to focus on the task at hand. When your work takes place on a desktop or laptop where there are unlimited diversions just a click away, you suddenly find yourself 45 minutes into an unplanned YouTube marathon or scrolling through your Facebook feed. It becomes a downward spiral that benefits no one.

An overworked mind also loses its ability to spin down in the evening so you can get an adequate amount of sleep. When your co-workers start boasting of being able to function on just 3 or 4 hours of sleep, they are lying. They are lying to you but, worse, they are lying to themselves. Very few of us can function adequately on less than 7 or 8 hours of sleep; for the rest of us, the negative effects accumulate. One study found that sleep deprivation has the same impact as drinking too much: those who were getting less than 7 hours of sleep fared the same as, or worse than, those with a 0.05% blood-alcohol level on a cognitive test. The legal limit in most states is 0.08%.

Finally, in an essay on Medium, Rachel Thomas points out that the Crush It Culture is discriminatory. Those who have a disability or chronic illness simply have fewer hours in the day to devote to work. They need time for medical support and usually require more sleep. In an industry like tech, where there is an unhealthy focus on the number of hours worked, these workers – who Thomas says make up at least 30% of the total workforce – are shut out.

The Crush It Culture is toxic. The science simply doesn’t support it. The only ones evangelizing it are those who directly benefit from this modernized version of feudalism. It’s time to call bullshit on them.

The Tourification of Our World

Who wouldn’t want to be in Venice? Gondolas drift by with Italian gondoliers singing “O Sole Mio.” You sit at a café savoring your espresso as you watch Latin lovers stroll by hand in hand on their way to the Bridge of Sighs. The Piazza San Marco is bathed in a golden glow as the sun sets behind the Basilica di San Marco. The picture? Perfect.  

Again, who wouldn’t want to live in Venice?

The answer, according to the latest population stats, is almost everyone. The population of Venice is one third what it was in 1970.

The sharp-eyed among you may have noticed that I changed the sentence slightly in the second version. I replaced “be” with “live.” And that’s the difference. Venice is literally the “nice place to visit, but I wouldn’t want to live there.”

A lot of people do visit, well over 5 million a year. But almost nobody lives there. The permanent population of Venice has shrunk to below 60,000.

Venice has become tourified. It’s a false front of a city, one built for those who are going to be there for 48 to 72 hours. In the process, everything needed to make it sustainable for those who want to call it home has been stripped out. It has become addicted to tourist dollars — and that addiction is killing it.

We should learn from Venice’s example. Sometimes, in trying to make a fantasy real, you take away the very things needed to let it survive.

Perfection doesn’t exist in nature. Imperfections are required for robustness. Yet, we are increasingly looking for a picture of perfection we can escape to.

The unintended consequences of this are troubling to think about.

We spent a good part of the last century devising new ways to escape. What was once an activity that lived well apart from our real lives has become increasingly entwined with those lives.

As our collective affluence has grown, we spend more and more time chasing the fantastical. Social media has accelerated this chase. Our feeds are full of posts from those in pursuit of a fantasy.

We have shifted our focus from the place we live to the “nice place to visit.” This distorts our expectations of what reality should be. We expect the tourist-brochure version of Venice without realizing that in constructing exactly that, we set in motion a chain of events resulting in a city that’s unlivable.

The rise of populist politics is the broken-mirror image of this. Many of us have mythologized the America we want — or Britain, or any of the other countries that have gone down the populist path. And myths are, by definition, unsustainable in the real world. They are vastly oversimplified pictures that allow us to create a story we long for. It’s the same as the picture I painted of Venice in the first paragraph: a fantasy that can’t survive reality.

In our tendency to “tourify” everything, there are at least two unintended consequences: one for ourselves and one for our world.

For us, the need to escape continually draws our energies and attentions from what we need to do to save the world we actually live in, toward the mythologization of the world we think we want to live in. We ignore the inconvenient truths of reality as we pursue our imagined perfection.

But it’s the second outcome that’s probably more troubling. Even if we were successful in building the world we think we want, we could well find that we’ve built a bigger version of Venice, a place sinking under the weight of its own fantasy.

Sometimes, you have to be careful what you wish for.