The Saddest Part about Sadfishing

There’s a certain kind of post I’ve always felt uncomfortable with when I see it on Facebook. You know the ones I’m talking about — where someone volunteers excruciatingly personal information about their failing relationships, their job dissatisfaction, their struggles with personal demons. These posts make me squirm.

Part of that feeling is that, being of British descent, I deal with emotions the same way the main character’s parents are dealt with in the first 15 minutes of any Disney movie: Dispose of them quickly, so we can get on with the business at hand.

I also suspect this ultra-personal sharing is happening in the wrong forum. So today, I’m trying to put an empirical finger on my gut feelings of unease about this particular topic.

After a little research, I found there’s a name for this kind of sharing: sadfishing. According to Wikipedia, “Sadfishing is the act of making exaggerated claims about one’s emotional problems to generate sympathy. The name is a variation on ‘catfishing.’ Sadfishing is a common reaction for someone going through a hard time, or pretending to be going through a hard time.”

My cynicism towards these posts probably sounds unnecessarily harsh. It goes against our empathetic grain. These are people who are just calling out for help. And one of the biggest issues with mental illness is the social stigma attached to it. Isn’t having the courage to reach out for help through any channel available — even social media — a good thing?

I do believe asking for help is undeniably a good thing. I wish I myself were better able to do that. It’s Facebook I have the problem with. Actually, I have a few problems with it.

It’s Complicated

Problem #1: Even if a post is a genuine request for help, the poster may not get the type of response he or she needs.

Mental illness, personal grief and major bumps on our life’s journey are all complicated problems — and social media is a horrible place to deal with complicated problems. It’s far too shallow to contain the breadth and depth of personal adversity.

Many read a gut-wrenching, soul-scorching post (genuine or not), then leave a heart or a sad face, and move on. Within the paper-thin social protocols of Facebook, this is an acceptable response. And it’s acceptable because we have no skin in the game. That brings us to problem #2.

Empathy is Wired to Work Face-to-Face

Our humanness works best in proximity. It’s the way we’re wired.

Let’s assume someone truly needs help. If you’re physically with them and you care about them, things are going to get real very quickly. It will be a connection that happens at all possible levels and through all senses.

This will require, at a minimum, hand-holding and, more likely, hugs, tears and a staggering personal commitment to help this person. It is not something taken or given lightly. It can be life-changing on both sides.

You can’t do it at arm’s length. And you sure as hell can’t do it through a Facebook reply.

The Post That Cried Wolf

But the biggest issue I have is that social media takes a truly genuine and admirable instinct, the simple act of helping someone, and turns it into just another example of fake news.

Not every plea for help on Facebook is exaggerated just for the sake of gaining attention, but some of them are.

Again, Facebook tends to take the less admirable parts of our character and amplify them throughout our network. So, if you tend to be narcissistic, you’re more apt to sadfish. If you have someone you know who continually reaches out through Facebook with uncomfortably personal posts of their struggles, it may be a sign of a deeper personality disorder, as noted in this post on The Conversation.

This phenomenon can create a kind of social numbness that could mask genuine requests for help. For the one sadfishing, it becomes another game that relies on generating the maximum number of social responses. Those of us on the other side quickly learn how to play the game. We minimize our personal commitment and shield ourselves against false drama.

The really sad thing about all of this is that social media has managed to turn legitimate cries for help into just more noise we have to filter through.

But What If It’s Real?

Sadfishing aside, for some people Facebook might be all they have in the way of a social lifeline. And in this case, we mustn’t throw the baby out with the bathwater. If someone you know and care about has posted what you suspect is a genuine plea for help, respond as humans should: Reach out in the most personal way possible. Elevate the conversation beyond the bounds of social media by picking up the phone or visiting them in person. Create a person-to-person connection and be there for them.

Saying So Long to SEMPO

Yesterday afternoon, while I was in line at the grocery store, my phone pinged. I was mentioned in a Twitter post. For me, that’s becoming a pretty uncommon experience. So I checked the post. And that’s how I found out that SEMPO is no more.

The tweet was from Dana Todd, who was responding to a Search Engine Journal article by Roger Montti about the demise of SEMPO. For those of you who don’t know SEMPO: it was the Search Engine Marketing Professionals Organization.

It was a big part of my life during what seems like a lifetime ago. Todd was even more involved. Hence the tweet.

Increasingly I find my remaining half-life in digital consists of an infrequent series of “remember-when” throwbacks. This will be one of those.

Todd’s issue with the article was that much of the 17-year history of the organization was glossed over, as Montti chose to focus mainly on the controversies of the first year or two of its existence.

As Todd said, “You only dredged up the early stages of the organization, in its infancy as we struggled to gain respect and traction, and were beset by naysayers who looked for a reason we should fail. We didn’t fail.”

She then added, “There is far more to the SEMPO story, and far more notable people who put in blood sweat and tears to build not just the organization, but the entire industry.”

I was one of those people. But before that, I was also one of the early naysayers.

SEMPO started in 2003. I didn’t join until 2004. I spent at least part of that first year joining the chorus bitching about the organization. And then I realized that I could either bitch from the outside — or I could effect change from the inside.  

After joining, I quickly found myself on that same SEMPO board that I’d been complaining about. In 2005, I became co-chair of the research committee. In 2006, I became the chair of SEMPO. I served in that role for two years and eventually stepped down from SEMPO at the same time I stepped away from the search industry.

Like Todd (who was the president of SEMPO for part of the time I was the chairman), I am proud of what we did, and extraordinarily proud of the team that made it happen. Many of the people I admired most in the industry served with me on that board.

Todd will always be one of my favorite search people. But I also had the privilege of serving with Jeff Pruitt, Kevin Lee, Bill Hunt, Dave Fall, Christine Churchill and the person who got the SEMPO ball rolling, along with Todd: Barbara Coll. There were many, many others.

Now, SEMPO is being absorbed by the Digital Analytics Association, which, according to its announcement, “is committed to helping former SEMPO members become fully integrated into DAA, and will be forming a special interest group (SIG) for search analytics.”

I’ve got to admit: That hurts. Being swallowed up, becoming nothing more than a special interest group, is a rather ignoble end for the association I gave so much to.

But as anyone who has raised a child can tell you, you know you’ve been successful when they no longer need you. And that’s how I choose to interpret this event. The search industry no longer needs SEMPO, at least as a stand-alone organization.

And if that’s the case, then SEMPO knocked it out of the park. Because that sure as hell wasn’t true back in 2003.

Search in 2003 was the Wild West. According to legend, there were white-hat SEOs and black-hat SEOs.

But truth be told, most of us wore hats that were some shade of grey.

The gunslingers of natural search (or organic SEO) were slowly and very reluctantly giving up their turf to the encroaching new merchants of paid search. Google AdWords had only been around for three years, but its launch introduced a whole new dynamic to the ecosystem. Google suddenly had to start a relationship with search marketers.

Before that, Google’s only attempt to reach out came via a rogue mystery poster on SEO industry forums named “googleguy” (later suspected to be search quality team lead Matt Cutts). To call search an industry would be stretching the term to its breaking point.

The introduction of paid search was creating a two-sided marketplace, and that was forcing search to become more civilized.

The process of civilization is always difficult. It requires the establishment of trust and respect, two commodities that were in desperately short supply in search circa 2003.

SEMPO was the one organization that did the most to bring civilization to the search marketplace. It gave Google a more efficient global conduit to thousands of search marketers. And it gave those search marketers a voice that Google would actually pay some attention to.

But it was more than just starting a conversation. SEMPO challenged search marketers to think beyond their own interests. The organization laid the foundation for a more sustainable and equitable search ecosystem. If SEMPO accomplished anything to be proud of, it was in preventing the Tragedy of the Commons from killing search before it had a chance to establish itself as the fastest growing advertising marketplace in history.

Dana Todd wrapped up her extended Twitter post by writing, “I can say confidently Google wouldn’t be worth $1T without us. SEMPO — you mattered.”

Dana, just like in the old SEMPO days when we double-teamed a message, you said it better than I ever could.

And Google? You’re welcome.

The Fundamentals of an Evil Marketplace

Last week, I talked about the nature of tech companies and why this leads to them being evil. But as I said, there was an elephant in the room I didn’t touch on — and that’s the nature of the market itself. The platform-based market also has inherent characteristics that lead toward being evil.

The problem is that corporate ethics are usually based on the philosophies of Milton Friedman, an economist whose heyday was in the 1970s. Corporations are playing by a rule book that is tragically out of date.

Beware the Invisible Hand

Friedman said, “The great virtue of a free market system is that it does not care what color people are; it does not care what their religion is; it only cares whether they can produce something you want to buy. It is the most effective system we have discovered to enable people who hate one another to deal with one another and help one another.”

This is a porting over of Adam Smith’s “Invisible Hand” theory from economics to ethics: the idea that an open and free marketplace is self-regulating and, in the end, the model that is the most virtuous to the greatest number of people will take hold.

That was a philosophy born in another time, referring to a decidedly different market. Friedman’s “virtue” depends on a few traditional market conditions, idealized in the concept of a perfect market: “a market where the sellers of a product or service are free to compete fairly, and sellers and buyers have complete information.”

Inherent in Friedman’s definition of market ethics is the idea of a deliberate transaction, a value trade driven by rational thought. This is where the concept of “complete information” comes in. This information is what’s required for a rational evaluation of the value trade. When we talk about the erosion of ethics we see in tech, we quickly see that the prerequisite of a deliberate and rational transaction is missing — and with it, the conditions needed for an ethical “invisible hand.”

The other assumption in Friedman’s definition is a marketplace that encourages open and healthy competition. This gives buyers the latitude to make the choice that best aligns with their requirements.

But when we’re talking about markets that tend to trend towards evil behaviors, we have to understand that there’s a slippery slope that ends in a place far different than the one Friedman idealized.

Advertising as a Revenue Model

For developers of user-dependent networks like Google and Facebook, using advertising sales for revenue was the path of least resistance for adoption — and, once adopted by users, to profitability. It was a model co-opted from other forms of media, so everybody was familiar with it. But, in the adoption of that model, the industry took several steps away from the idea of a perfect market.

First of all, the model significantly lowers the bar for that rational value-exchange calculation. For users, there is no apparent monetary cost. Our value-judgement mechanisms idle down because it doesn’t appear as if the protection they provide is needed.

In fact, the opposite happens. The reward center of our brain perceives a bargain and starts pumping the accelerator. We rush past the accept buttons to sign up, thrilled at the new capabilities and convenience we receive for free. That’s the first problem.

The second is that the minute you introduce advertising, you lose the transparency that’s part of the perfect market. There is a thick layer of obfuscation that sits between “users” and “producers.” The smoke screen is required because of the simple reality that the best interests of the user are almost never aligned with the best interests of the advertiser.

In this new marketplace, advertising is a zero-sum game. For the advertiser to win, the user has to lose. Platform developers hide this simple arithmetic behind a veil of secrecy and baffling language.

Products That are a Little Too Personal

The new marketplace is different in another important way: The products it deals in are unlike any products we’ve ever seen before.

The average person spends about a third of his or her time online, mostly interacting with a small handful of apps and platforms. Facebook alone accounts for almost 20% of all our waking time.

This reliance on these products reinforces our belief that we’re getting the bargain of a lifetime: All the benefits the platform provides are absolutely free to us! Of course, in the time we spend online, we are feeding these tools a constant stream of intimately personal information about ourselves.

What is lurking behind this benign facade is a troubling progression of addictiveness. Because revenue depends on advertising sales, two factors become essential to success: the attention of users, and information about them.

An offer of convenience or usefulness “for free” is the initial hook, but then it becomes essential to entice them to spend more time with the platform and also to volunteer more information about themselves. The most effective way to do this is to make them more and more dependent on the platform.

Now, you could build conscious dependency by giving users good, rational reasons to keep coming back. Or, you could build dependence subconsciously, by creating addicts. The first option is good business that follows Friedman’s philosophy. The second option is just evil. Many tech platforms — Facebook included — have chosen to go down both paths.

The New Monopolies

The final piece of Friedman’s idealized marketplace that’s missing is the concept of healthy competition. In a perfect marketplace, the buyer’s cost of switching is minimal. You have a plethora of options to choose from, and you’re free to pursue the one best for you.

This is definitely not the case in the marketplace of online platforms and tools like Google and Facebook. Because they are dependent on advertising revenues, their survival is linked to audience retention. To this end, they have constructed virtual monopolies by ruthlessly eliminating or buying up any potential competitors.

Further, under the guise of convenience, they have imposed significant costs on those who do choose to leave. The net effect is that users are faced with a binary decision: Opt into the functionality and convenience offered, or opt out. There are no other choices.

Whom Do You Serve?

Friedman also said, in a 1970 essay, that the only social responsibility of a business is to increase its profits. But this raises a further question: “What must be done — and for whom — to increase profits?” If it’s creating a better product so users buy more, then there is an ethical trickle-down effect that should benefit all.

But this isn’t the case if profitability is dependent on selling more advertising. Now we have to deal with an inherent ethical conflict. On one side, you have the shareholders and advertisers. On the other, you have users. As I said, for one to win, the other must lose. If we’re looking for the root of all evil, we’ll probably find it here.

Why Good Tech Companies Keep Being Evil

You’d think we’d have learned by now. But somehow it still comes as a shock to us when tech companies are exposed as having no moral compass.

Slate recently released what it called the “Evil List” of 30 tech companies, compiled through a ballot sent out to journalists, scholars, analysts, advocates and others. Slate asked them which companies were doing business in the way that troubled them most. Spoiler alert: Amazon, Facebook and Google topped the list. But they weren’t alone. Rounding out the top 10, the list of culprits included Twitter, Apple, Microsoft and Uber.

Which raises the question: Are tech companies inherently evil — like, say, a Monsanto or Philip Morris — or is there something about tech that positively correlates with “evilness”?

I suspect it’s the second of these. I don’t believe Silicon Valley is full of fundamentally evil geniuses, but doing business as usual at a successful tech firm means there will be a number of elemental aspects of the culture that take a company down the path to being evil.

Cultism, Loyalism and Self-Selection Bias

A successful tech company is a belief-driven meat grinder that sucks in raw, naïve talent on one end and spits out exhausted and disillusioned husks on the other. To survive in between, you’d better get with the program.

The HR dynamics of a tech startup have been called a meritocracy, where intellectual prowess is the only currency.

But that’s not quite right. Yes, you have to be smart, but it’s more important that you’re loyal. Despite their brilliance, heretics are weeded out and summarily turfed, optionless in more ways than one. A rigidly molded group-think mindset takes over the recruitment process, leading to an intellectually homogeneous monolith.

To be fair, high-growth startups need this type of mental cohesion. As blogger Paras Chopra said in a post entitled “Why startups need to be cult-like”: “The reason startups should aim to be like cults is because communication is impossible between people with different values.” You can’t go from zero to 100 without this sharing of values.

But necessary or not, this doesn’t change the fact that your average tech startup is a cult, with all the same ideological underpinnings. And the more cult-like a culture, the less likely it is to take the time for a little ethical navel-gazing.

A Different Definition of Problem Solving

When all you have is a hammer, everything looks like a nail. And for the engineer, the hammer that fixes everything is technology. But, as academic researchers Emanuel Moss and Jacob Metcalf discovered, this brand of technical solutionism can lead to a corporate environment where ethical problems are ignored because they are open-ended, intractable questions. In a previous column I referred to them as “wicked problems.”

As Moss and Metcalf found, “Organizational practices that facilitate technical success are often ported over to ethics challenges. This is manifested in the search for checklists, procedures, and evaluative metrics that could break down messy questions of ethics into digestible engineering work. This optimism is counterweighted by a concern that, even when posed as a technical question, ethics becomes ‘intractable, like it’s too big of a problem to tackle.’”

If you take this to the extreme, you get the Cambridge Analytica example, where programmer Christopher Wylie was so focused on the technical aspects of the platform he was building that he lost sight of the ethical monster he was unleashing.

A Question of Leadership

Of course, every cult needs a charismatic leader, and this is abundantly true for tech-based companies. Hubris is a commodity not in short supply among the C-level execs of tech.

It’s not that they’re assholes (well, ethical assholes anyway). It’s just that they’re, umm, highly focused and instantly dismissive of any viewpoint that’s not the same as their own. It’s the same issue I mentioned before about the pitfalls of expertise — but on steroids.

I suspect that if you did an ethical inventory of Mark Zuckerberg, Jeff Bezos, Larry Page, Sergey Brin, Travis Kalanick, Reid Hoffman and the rest, you’d find that — on the whole — they’re not horrible people. It’s just that they have a very specific definition of ethics as it pertains to their company. Anything that falls outside those narrowly defined boundaries is either dismissed or “handled” so it doesn’t get in the way of the corporate mission.

Speaking of corporate missions, leaders and their acolytes are often unaware — sometimes willfully so — of the nuances of unintended consequences. Most tech companies develop platforms that allow disruptive new market-based ecosystems to evolve on their technological foundations. Disruption always unleashes unintended social consequences. When these inevitably happen, tech companies generally handle them in one of three ways:

  1. Ignore them, and if that fails…
  2. Deny responsibility, and if that fails…
  3. Briefly apologize, do nothing, and then return to Step 1.

There is a weird type of idol worship in tech. The person atop the org chart is more than an executive. They are corporate gods — and those who dare to be disagreeable are quickly weeded out as heretics. This helps explain why Facebook can be pilloried for attacks on personal privacy and questionable design ethics, yet Mark Zuckerberg still snags a 92% CEO approval rating on Glassdoor.com.

These fundamental characteristics help explain why tech companies seem to consistently stumble over to the dark side. But there’s an elephant in the room we haven’t talked about. Almost without exception, tech business models encourage evil behavior. Let’s hold that thought for a future discussion.

A Troubling Prognostication

It’s that time of year again. My inbox is jammed with pitches from PR flacks trying to get some editorial love for their clients. In all my years of writing, I think I have actually taken the bait maybe once or twice. That is an extremely low success rate. So much for targeting.

In early January, many of the pitches offer either reviews of 2019 or predictions for 2020. I was just about to hit the delete button on one such pitch when something jumped out at me: “The number-one marketing trend for 2020 will be CDPs: customer data platforms.”

I wasn’t surprised by that. It makes sense. I know there’s a truckload of personal data being collected from everyone and their dog. Marketers love platforms. Why wouldn’t these two things come together?

But then I thought more about it — and immediately had an anxiety attack. This is not a good thing. In fact, this is a catastrophically terrible thing. It’s right up there with climate change and populist politics as the biggest world threats that keep me up at night.

To close out 2019, fellow Insider Maarten Albarda gave you a great guide on where not to spend your money. In that column, he said this: “Remember when connected TVs, Google Glass and the Amazon Fire Phone were going to provide break-through platforms that would force mass marketing out of the box, and into the promised land of end-to-end, personalized one-on-one marketing?”

Ah, marketing nirvana: the Promised Land! The Holy Grail of personalized marketing. A perfect, friction-free direct connection between the marketer and the consumer.

Maarten went on to name social media as one of the channels you shouldn’t be throwing money into: “It’s also true that we have yet to see a compelling case where social media played a significant role in the establishment or continued success of a brand or service.”

I’m not sure I agree with this, though I admit I don’t have the empirical data to back up my opinion. But I do have another, darker reason why we should shut off the taps providing the flow of revenue to the usual social suspects. Social media based on an advertising revenue model is a cancerous growth — and we have to shut off its blood flow.

Personalized one-to-one marketing — that Promised Land — cannot exist without a consistent and premeditated attack on our privacy. It comes at a price we should not be prepared to pay.

It depends on us trusting profit-driven corporations that have proven again and again that they shouldn’t be trusted. It is fueled by our darkest and least admirable motives.

The ecosystem that is required to enable one-to-one marketing is a cesspool of abuse and greed. In a pristine world of marketing with players who sport shiny ideals and rock-solid ethics, maybe it would be okay. Maybe. Personally, I wouldn’t take that bet. But in the world we actually live and work in, it’s a sure recipe for disaster.

To see just how subversive data-driven marketing can get, read “Mindf*ck” by Christopher Wylie. If that name sounds vaguely familiar to you, let me jog your memory. Wylie is the whistleblower who first exposed the Cambridge Analytica scandal. An openly gay, liberal, pink-haired Canadian, he seems an unlikely candidate to be the architect of the data-driven “Mindf*ck” machine that drove Trump into office and the Brexit vote over the 50% threshold.

Wylie admits to being blinded by the tantalizing possibilities of what he was working on at Cambridge Analytica: “Every day, I overlooked, ignored, or explained away warning signs. With so much intellectual freedom, and with scholars from the world’s leading universities telling me we were on the cusp of ‘revolutionizing’ social science, I had gotten greedy, ignoring the dark side of what we were doing.”

But Wylie is more than a whistleblower. He’s a surprisingly adept writer who has a firm grasp on not just the technical aspects, but also the psychology behind the weaponization of data. If venture capitalist Roger McNamee’s tell-all exposé of Facebook, “Zucked,” kept you up at night, “Mindf*ck” will give you screaming night terrors.

I usually hold off jumping on the year-end prognostication bandwagon, because I’ve always felt it’s a mug’s game. I would like to think that 2020 will be the year when the world becomes “woke” to the threat of profit-driven data abuse — but based on our collective track record of ignoring inconvenient truths, I’m not holding my breath.

'Twas the Night Before the Internet

Today, just one day before Christmas, my mind swings to the serendipitous side. I don’t know about you, but for me, 2019 has been a trying year. While you would never know it by the collection of columns I’ve produced over the past 12 months, I have tried to find the glimpses of light in the glowering darkness.

Serendipity Sidetrack #1: “Glowering” is a word we don’t use much anymore. It refers to someone with a dark, angry expression on their face. As such, it’s pretty timely and relevant. You’d think we would use it more.

One of my personal traditions during the holidays is to catch one of the fourteen billion airings of “It’s a Wonderful Life.” Yes, it’s quintessentially Capraesque. Yes, it’s corny as hell. But give me a big seasonal helping of Jimmy Stewart, Donna Reed and that “crummy little town” known as Bedford Falls.

Serendipity Sidetrack #2: The movie “It’s a Wonderful Life” is based on a 1939 short story by Philip Van Doren Stern. He tried to get it published for several years with no success. He finally self-published it and sent it to 200 friends as a 24-page Christmas card. One of these cards ended up on the desk of an executive at RKO Pictures, who convinced the studio to buy the rights in 1943 as a vehicle for its star Cary Grant.

That movie never got made and the project was shelved for the rest of World War II. After the war, director Frank Capra read the script and chose it as his first Hollywood movie after making war documentaries and training films.

The movie was panned by critics and ignored by audiences. It was a financial disaster, eventually leading to the collapse of Capra’s new production company, Liberty Films. One other stray tidbit: during the scene at the high school dance where the gym floor opens over the pool (which was shot at Beverly Hills High School), Mary’s obnoxious date Freddie is played by an adult Carl “Alfalfa” Switzer, from the “Our Gang” series.

But I digress. This seasonal ritual got me thinking along “what if” lines. We learn what Bedford Falls would be like if George Bailey had never been born. But maybe the same narrative machinery could be applied to another example: What would Christmas (or your seasonal celebration of choice) be like if the Internet had never happened?

As I pondered this, I realized that there’s really only one aspect of the Internet that materially impacts what the holidays have become. These celebrations revolve around families, so if we’re going to look for changes wrought by technology, we have to look at the structure and dynamics of the family unit.

Serendipity Sidetrack #3: Christmas was originally not a family-based celebration. It became so in Victorian England thanks to Queen Victoria, Prince Albert and Charles Dickens. After the marriage of the royal couple, Albert brought the German tradition of the Christmas tree to Windsor Castle. Pictures of the royals celebrating with family around the tree firmly shifted the holiday towards its present warm-hearted family center.

In 1843, Dickens added social consciousness to the party with the publication of “A Christmas Carol.” The holiday didn’t take its detour towards overt consumerism until the prosperity of the 1950s.

But back to my rapidly unraveling narrative thread: What would Christmas be like without the Internet?

I have celebrated Christmas in two different contexts: the first in my childhood, and the second with my own wife and family.

I grew up with just my immediate family in rural Alberta, geographically distant from aunts, uncles and cousins. For dinner there would be six of us around the table. We might try to call an aunt or uncle who lived some 2,000 miles away, but usually the phone lines were so busy we couldn’t get through.

The day was spent with each other and usually involved a few card games, a brief but brisk walk and getting ready for Christmas dinner. It was low-key, but I still have many fond memories of my childhood Christmases.

Then I got married. My wife, who is Italian, has dozens and dozens and dozens of relatives within a stone’s throw in any direction. For us, Christmas is now a progressive exercise to see just how many people can be crammed into the same home. It begins at our house for Christmas morning with the “immediate” family (remember, I use the term in its Italian context). The head count varies between 18 and 22 people.

Then, we move to Christmas dinner with the “extended” family. The challenge here is finding a house big enough, because we are now talking 50 to 75 people. It’s loud, it’s chaotic — and I couldn’t imagine Christmas any other way.

The point here is how the Internet has shifted the nature of the celebration. In my lifespan, I have seen two big shifts, both to do with the nature of our personal connections. And like most things with technology, one has been wonderful while the other has been troubling.

First of all, thanks to the Internet, we can extend our family celebrations beyond the limits of geography. I can now connect with family members who don’t live in the same town.

But, ironically, the same technology has been eroding the bonds we have with the family we are physically present with. We may be in the same room, but our minds are elsewhere, preoccupied with the ever-present screens in our pockets or purses. In my pre-Internet memories of Christmas, we were fully there with our families. Now, this is rarely the case.

And one last thought. I find — sadly — that Christmas is just one more occasion to be shared through social media. For some of us, it’s not so much about who we’re with or what we’re doing as about how it will look in our Instagram post.

The Ruts of Our Brain

We are not, by nature, open-minded. In fact, as we learn something, the learning creates neural pathways in our brain that we tend to stick to. In other words, the more we learn, the bigger the ruts get.

Our brains are this way by design. At its core, the brain is an energy-saving device. If there are two options open to it, one requiring more cognitive processing and one requiring less, the brain will default to the less resource-intensive option.

This puts expertise into an interesting new perspective. In a recent study, researchers from Cold Spring Harbor Laboratory, Columbia University, University College London and the Flatiron Institute found that when mice learn a new task, the neurons in their brains actually change as they move from novice to expert. At the beginning, as they’re learning the task, the required neurons don’t “fire” until the brain makes a decision. But, as expertise is gained, those same neurons start responding before they’re even needed. It’s essentially Hebbian theory (named after psychologist Donald Hebb) in action: the neurons that fire together eventually wire together.
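
To make that “fire together, wire together” idea concrete, here’s a toy sketch in Python. It’s purely illustrative (the function, the learning rate and the numbers are my own assumptions, not anything from the study), but it shows how repeated co-activation strengthens a connection until the path is, in effect, pre-lit:

    # A toy illustration of Hebb's rule: neurons that fire together wire together.
    # All names and numbers here are illustrative assumptions, not from the study.
    def hebbian_update(weight: float, pre_fired: bool, post_fired: bool,
                       learning_rate: float = 0.1) -> float:
        """Strengthen the connection whenever both neurons fire together."""
        if pre_fired and post_fired:
            weight += learning_rate  # each repetition deepens the "rut"
        return weight

    weight = 0.0
    for _ in range(20):  # practice the same task over and over
        weight = hebbian_update(weight, pre_fired=True, post_fired=True)

    print(round(weight, 2))  # 2.0: a well-worn path the brain can "pre-light"

Notice that nothing in the rule ever weakens the connection, which is exactly why the ruts only get deeper.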

We tend to think of experts as bringing a well-honed subset of intellectual knowledge to a question. And that is true, as long as the question is well within their area of expertise. But the minute experts venture outside their “rut,” they begin to flounder. In fact, even when they are in their area of expertise but are asked to predict where that path may lead in the future, beyond their current rut, their expertise doesn’t help them. In 2005, psychologist Philip Tetlock published “Expert Political Judgment,” a book showing the results of a 20-year study on the prediction track record of experts. It wasn’t good. According to a New Yorker review of the book, “Human beings who spend their lives studying the state of the world…are poorer forecasters than dart-throwing monkeys.”

Why? Well, just like those mice in the above-mentioned study, once we have a rut, our brains like to stick to the rut. It’s just easier for us. And experts have very deep ruts. The deeper the rut, the more effort it takes to peer above it. As Tetlock found, when it comes to predicting what might happen in some area in the future, even if you happen to be an expert in that area, you’d probably be better off flipping a coin than relying on your brain.

By the way, for most of human history, this has been a feature, not a bug. Saving cognitive energy is a wonderful evolutionary advantage. If you keep doing the same thing over and over, eventually the brain pre-lights the neuronal path required, saving itself time and energy. The brain is directing anticipated traffic faster than the speed of thought. And it’s doing it so well, it would take a significant amount of cognitive horsepower to derail this action.

Like I said, in a fairly predictable world of cause and effect, this system works. But in an uncertain world full of wild-card complexity, it can be crippling.

Complex worlds require foxes, not hedgehogs. This analogy also comes from Tetlock’s book. According to an old Greek fable, “The fox knows many things, but the hedgehog knows just one thing.” To that I would add: the fox knows a little about many things, but the hedgehog knows a lot about one thing. In other words, the hedgehog is an expert.

In Tetlock’s study, people with “fox” qualities had a significantly better track record than “hedgehogs” when it came to predicting the future. Their brains were better able to take the time to synthesize the various data inputs required to deal with the complexity of crystal-balling the future, because they weren’t barrelling down a pre-ordained path carved by years of accumulated expertise.

But it’s not just expertise that creates these ruts in our brains. The same pattern plays out when we look at the role our beliefs play in how open-minded we are. The stronger the belief, the deeper the rut.

Again, we have to remember that this tendency of our brains to form well-travelled grooves over time has been crafted by the blind watchmaker of evolution. But that doesn’t make it any less troubling when we think about the limitations it imposes in a more complex world. This is especially true when new technologies deliberately leverage our vulnerability in this area. Digital platforms ruthlessly eliminate the real estate that lies between perspectives. The ideological landscape in which foxes can effectively operate is disappearing. Increasingly, we grasp for expertise, whether it’s on the right or left of any particular topic, with the goal of preserving our own mental ruts.

And as the ruts get deeper, foxes are becoming an endangered species.