Saying So Long to SEMPO

Yesterday afternoon, while I was in line at the grocery store, my phone pinged. I was mentioned in a Twitter post. For me, that’s becoming a pretty uncommon experience. So I checked the post.  And that’s how I found out that SEMPO is no more.

The tweet was from Dana Todd, who was responding to a Search Engine Journal article by Roger Montti about the demise of SEMPO. For those of you who don’t know SEMPO: it was the Search Engine Marketing Professionals Organization.

It was a big part of my life during what seems like a lifetime ago. Todd was even more involved. Hence the tweet.

Increasingly I find my remaining half-life in digital consists of an infrequent series of “remember-when” throwbacks. This will be one of those.

Todd’s issue with the article was that much of the 17-year history of the organization was glossed over, as Montti chose to focus mainly on the controversies of the first year or two of its existence.

As Todd said, “You only dredged up the early stages of the organization, in its infancy as we struggled to gain respect and traction, and were beset by naysayers who looked for a reason we should fail. We didn’t fail.”

She then added, “There is far more to the SEMPO story, and far more notable people who put in blood sweat and tears to build not just the organization, but the entire industry.”

I was one of those people. But before that, I was also one of the early naysayers.

SEMPO started in 2003. I didn’t join until 2004. I spent at least part of that first year joining the chorus bitching about the organization. And then I realized that I could either bitch from the outside — or I could effect change from the inside.  

After joining, I quickly found myself on that same SEMPO board I’d been complaining about. In 2005, I became co-chair of the research committee. In 2006, I became the chair of SEMPO. I served in that role for two years and eventually stepped down at the same time I stepped away from the search industry.

Like Todd (who was the president of SEMPO for part of the time I was the chairman), I am proud of what we did, and extraordinarily proud of the team that made it happen. Many of the people I admired most in the industry served with me on that board.

Todd will always be one of my favorite search people. But I also had the privilege of serving with Jeff Pruit, Kevin Lee, Bill Hunt, Dave Fall, Christine Churchill and the person who got the SEMPO ball rolling, along with Todd: Barbara Coll. There were many, many others.

Now, SEMPO is being absorbed by the Digital Analytics Association, which, according to its announcement, “is committed to helping former SEMPO members become fully integrated into DAA, and will be forming a special interest group (SIG) for search analytics.”

I’ve got to admit: That hurts. Being swallowed up, becoming nothing more than a special interest group, is a rather ignoble end for the association I gave so much to.

But as anyone who has raised a child can tell you, you know you’ve been successful when they no longer need you. And that’s how I choose to interpret this event. The search industry no longer needs SEMPO, at least as a stand-alone organization.

And if that’s the case, then SEMPO knocked it out of the park. Because that sure as hell wasn’t true back in 2003.

Search in 2003 was the Wild West. According to legend, there were white-hat SEOs and black-hat SEOs.

But truth be told, most of us wore hats that were some shade of grey.

The gunslingers of natural search (or organic SEO) were slowly and very reluctantly giving up their turf to the encroaching new merchants of paid search. Google AdWords had only been around for three years, but its launch introduced a whole new dynamic to the ecosystem. Google suddenly had to start a relationship with search marketers.

Before that, Google’s only attempt to reach out came via a rogue mystery poster on SEO industry forums named “googleguy” (later suspected to be search quality team lead Matt Cutts). To call search an industry back then would be stretching the term to its breaking point.

The introduction of paid search was creating a two-sided marketplace, and that was forcing search to become more civilized.

The process of civilization is always difficult. It requires the establishment of trust and respect, two commodities that were in desperately short supply in search circa 2003.

SEMPO was the one organization that did the most to bring civilization to the search marketplace. It gave Google a more efficient global conduit to thousands of search marketers. And it gave those search marketers a voice that Google would actually pay some attention to.

But it was more than just starting a conversation. SEMPO challenged search marketers to think beyond their own interests. The organization laid the foundation for a more sustainable and equitable search ecosystem. If SEMPO accomplished anything to be proud of, it was in preventing the Tragedy of the Commons from killing search before it had a chance to establish itself as the fastest growing advertising marketplace in history.

Dana Todd wrapped up her extended Twitter post by writing, “I can say confidently Google wouldn’t be worth $1T without us. SEMPO — you mattered.”

Dana, just like in the old SEMPO days when we double-teamed a message, you said it better than I ever could.

And Google? You’re welcome.

The Fundamentals of an Evil Marketplace

Last week, I talked about the nature of tech companies and why this leads to them being evil. But as I said, there was an elephant in the room I didn’t touch on — and that’s the nature of the market itself. The platform-based market also has inherent characteristics that lead toward being evil.

The problem is that corporate ethics are usually based on the philosophies of Milton Friedman, an economist whose heyday was in the 1970s. Corporations are playing by a rule book that is tragically out of date.

Beware the Invisible Hand

Friedman said, “The great virtue of a free market system is that it does not care what color people are; it does not care what their religion is; it only cares whether they can produce something you want to buy. It is the most effective system we have discovered to enable people who hate one another to deal with one another and help one another.”

This is a porting over of Adam Smith’s “Invisible Hand” theory from economics to ethics: the idea that an open and free marketplace is self-regulating and, in the end, the model that is the most virtuous to the greatest number of people will take hold.

That was a philosophy born in another time, referring to a decidedly different market. Friedman’s “virtue” depends on a few traditional market conditions, idealized in the concept of a perfect market: “a market where the sellers of a product or service are free to compete fairly, and sellers and buyers have complete information.”

Inherent in Friedman’s definition of market ethics is the idea of a deliberate transaction, a value trade driven by rational thought. This is where the concept of “complete information” comes in. This information is what’s required for a rational evaluation of the value trade. When we talk about the erosion of ethics we see in tech, we quickly see that the prerequisite of a deliberate and rational transaction is missing — and with it, the conditions needed for an ethical “invisible hand.”

The other assumption in Friedman’s definition is a marketplace that encourages open and healthy competition. This gives buyers the latitude to make the choice that best aligns with their requirements.

But when we’re talking about markets that tend to trend towards evil behaviors, we have to understand that there’s a slippery slope that ends in a place far different than the one Friedman idealized.

Advertising as a Revenue Model

For developers of user-dependent networks like Google and Facebook, using advertising sales for revenue was the path of least resistance for adoption — and, once adopted by users, to profitability. It was a model co-opted from other forms of media, so everybody was familiar with it. But, in the adoption of that model, the industry took several steps away from the idea of a perfect market.

First of all, you have significantly lowered the bar required for that rational value exchange calculation. For users, there is no apparent monetary cost. Our value judgement mechanisms idle down because it doesn’t appear as if the protection they provide is needed.

In fact, the opposite happens. The reward center of our brain perceives a bargain and starts pumping the accelerator. We rush past the accept buttons to sign up, thrilled at the new capabilities and convenience we receive for free. That’s the first problem.

The second is that the minute you introduce advertising, you lose the transparency that’s part of the perfect market. There is a thick layer of obfuscation that sits between “users” and “producers.” The smoke screen is required because of the simple reality that the best interests of the user are almost never aligned with the best interests of the advertiser.

In this new marketplace, advertising is a zero-sum game. For the advertiser to win, the user has to lose. The developers of these platforms hide this simple arithmetic behind a veil of secrecy and baffling language.

Products That are a Little Too Personal

The new marketplace is different in another important way: The products it deals in are unlike any products we’ve ever seen before.

The average person spends about a third of his or her time online, mostly interacting with a small handful of apps and platforms. Facebook alone accounts for almost 20% of all our waking time.

This reliance on these products reinforces our belief that we’re getting the bargain of a lifetime: All the benefits the platform provides are absolutely free to us! Of course, in the time we spend online, we are feeding these tools a constant stream of intimately personal information about ourselves.

What is lurking behind this benign facade is a troubling progression of addictiveness. Because revenue depends on advertising sales, two factors become essential to success: the attention of users, and information about them.

An offer of convenience or usefulness “for free” is the initial hook, but then it becomes essential to entice users to spend more time with the platform and to volunteer more information about themselves. The most effective way to do this is to make them more and more dependent on the platform.

Now, you could build conscious dependency by giving users good, rational reasons to keep coming back. Or, you could build dependence subconsciously, by creating addicts. The first option is good business that follows Friedman’s philosophy. The second option is just evil. Many tech platforms — Facebook included — have chosen to go down both paths.

The New Monopolies

The final piece of Friedman’s idealized marketplace that’s missing is the concept of healthy competition. In a perfect marketplace, the buyer’s cost of switching is minimal. You have a plethora of options to choose from, and you’re free to pursue the one best for you.

This is definitely not the case in the marketplace of online platforms and tools like Google and Facebook. Because they are dependent on advertising revenues, their survival is linked to audience retention. To this end, they have constructed virtual monopolies by ruthlessly eliminating or buying up any potential competitors.

Further, under the guise of convenience, they have imposed significant costs on those that do choose to leave. The net effect of this is that users are faced with a binary decision: Opt into the functionality and convenience offered, or opt out. There are no other choices.

Whom Do You Serve?

Friedman also said in a 1970 paper that the only social responsibility of a business is to increase its profits. But this begs the further question, “What must be done — and for whom — to increase profits?” If it’s creating a better product so users buy more, then there is an ethical trickle-down effect that should benefit all.

But this isn’t the case if profitability is dependent on selling more advertising. Now we have to deal with an inherent ethical conflict. On one side, you have the shareholders and advertisers. On the other, you have users. As I said, for one to win, the other must lose. If we’re looking for the root of all evil, we’ll probably find it here.

Why Good Tech Companies Keep Being Evil

You’d think we’d have learned by now. But somehow it still comes as a shock to us when tech companies are exposed as having no moral compass.

Slate recently released what it called the “Evil List” of 30 tech companies, compiled through a ballot sent out to journalists, scholars, analysts, advocates and others. Slate asked them which companies were doing business in the way that troubled them most. Spoiler alert: Amazon, Facebook and Google topped the list. But they weren’t alone. Rounding out the top 10, the list of culprits included Twitter, Apple, Microsoft and Uber.

Which begs the question: Are tech companies inherently evil — like, say, a Monsanto or a Philip Morris — or is there something about tech that positively correlates with “evilness”?

I suspect it’s the second of these.  I don’t believe Silicon Valley is full of fundamentally evil geniuses, but doing business as usual at a successful tech firm means there will be a number of elemental aspects of the culture that take a company down the path to being evil.

Cultism, Loyalism and Self-Selection Bias

A successful tech company is a belief-driven meat grinder that sucks in raw, naïve talent on one end and spits out exhausted and disillusioned husks on the other. To survive in between, you’d better get with the program.

The HR dynamics of a tech startup have been called a meritocracy, where intellectual prowess is the only currency.

But that’s not quite right. Yes, you have to be smart, but it’s more important that you’re loyal. Despite their brilliance, heretics are weeded out and summarily turfed, optionless in more ways than one. A rigidly molded group-think mindset takes over the recruitment process, leading to an intellectually homogeneous monolith.

To be fair, high-growth startups need this type of mental cohesion. As blogger Paras Chopra said in a post entitled “Why startups need to be cult-like”: “The reason startups should aim to be like cults is because communication is impossible between people with different values.” You can’t go from zero to 100 without this sharing of values.

But necessary or not, this doesn’t change the fact that your average tech startup is a cult, with all the same ideological underpinnings. And the more cult-like a culture, the less likely it is to take the time for a little ethical navel-gazing.

A Different Definition of Problem Solving

When all you have is a hammer, everything looks like a nail. And for the engineer, the hammer that fixes everything is technology. But, as academic researchers Emanuel Moss and Jacob Metcalf discovered, this brand of technical solutionism can lead to a corporate environment where ethical problems are ignored because they are open-ended, intractable questions. In a previous column I referred to them as “wicked problems.”

As Moss and Metcalf found, “Organizational practices that facilitate technical success are often ported over to ethics challenges. This is manifested in the search for checklists, procedures, and evaluative metrics that could break down messy questions of ethics into digestible engineering work. This optimism is counterweighted by a concern that, even when posed as a technical question, ethics becomes ‘intractable, like it’s too big of a problem to tackle.’”

If you take this to the extreme, you get the Cambridge Analytica example, where programmer Christopher Wylie was so focused on the technical aspects of the platform he was building that he lost sight of the ethical monster he was unleashing.

A Question of Leadership

Of course, every cult needs a charismatic leader, and this is abundantly true for tech-based companies. Hubris is a commodity not in short supply among the C-level execs of tech.

It’s not that they’re assholes (well, ethical assholes anyway). It’s just that they’re, umm, highly focused and instantly dismissive of any viewpoint that’s not the same as their own. It’s the same issue I mentioned before about the pitfalls of expertise — but on steroids.

I suspect that if you did an ethical inventory of Mark Zuckerberg, Jeff Bezos, Larry Page, Sergey Brin, Travis Kalanick, Reid Hoffman and the rest, you’d find that — on the whole — they’re not horrible people. It’s just that they have a very specific definition of ethics as it pertains to their company. Anything that falls outside those narrowly defined boundaries is either dismissed or “handled” so it doesn’t get in the way of the corporate mission.

Speaking of corporate missions, leaders and their acolytes are often unaware — sometimes willfully so — of the nuances of unintended consequences. Most tech companies develop platforms that allow disruptive new market-based ecosystems to evolve on their technological foundations. Disruption always unleashes unintended social consequences. When these inevitably happen, tech companies generally handle them in one of three ways:

  1. Ignore them, and if that fails…
  2. Deny responsibility, and if that fails…
  3. Briefly apologize, do nothing, and then return to Step 1.

There is a weird type of idol worship in tech. The person atop the org chart is more than an executive. They are corporate gods — and those who dare to disagree are quickly weeded out as heretics. This helps explain why Facebook can be pilloried for attacks on personal privacy and questionable design ethics, yet Mark Zuckerberg still snags a 92% CEO approval rating on Glassdoor.com.

These fundamental characteristics help explain why tech companies seem to consistently stumble over to the dark side. But there’s an elephant in the room we haven’t talked about: almost without exception, tech business models encourage evil behavior. Let’s hold that thought for a future discussion.

A Troubling Prognostication

It’s that time of year again. My inbox is jammed with pitches from PR flacks trying to get some editorial love for their clients. In all my years of writing, I think I have actually taken the bait maybe once or twice. That is an extremely low success rate. So much for targeting.

In early January, many of the pitches offer either reviews of 2019 or predictions for 2020.  I was just about to hit the delete button on one such pitch when something jumped out at me: “The number-one marketing trend for 2020 will be CDPs: customer data platforms.”

I wasn’t surprised by that. It makes sense. I know there’s a truckload of personal data being collected from everyone and their dog. Marketers love platforms. Why wouldn’t these two things come together?

But then I thought more about it — and immediately had an anxiety attack. This is not a good thing. In fact, this is a catastrophically terrible thing. It’s right up there with climate change and populist politics as the biggest world threats that keep me up at night.

To close out 2019,  fellow Insider Maarten Albarda gave you a great guide on where not to spend your money. In that column, he said this: “Remember when connected TVs, Google Glass and the Amazon Fire Phone were going to provide break-through platforms that would force mass marketing out of the box, and into the promised land of end-to-end, personalized one-on-one marketing?”

Ah, marketing nirvana: the Promised Land! The Holy Grail of personalized marketing. A perfect, friction-free direct connection between the marketer and the consumer.

Maarten went on to say that social media is one of the channels you shouldn’t be throwing money into, saying, “It’s also true that we have yet to see a compelling case where social media played a significant role in the establishment or continued success of a brand or service.”

I’m not sure I agree with this, though I admit I don’t have the empirical data to back up my opinion. But I do have another, darker reason why we should shut off the taps providing the flow of revenue to the usual social suspects. Social media based on an advertising revenue model is a cancerous growth — and we have to shut off its blood flow.

Personalized one-to-one marketing — that Promised Land —  cannot exist without a consistent and premeditated attack on our privacy. It comes at a price we should not be prepared to pay.

It depends on us trusting profit-driven corporations that have proven again and again that they shouldn’t be trusted. It is fueled by our darkest and least admirable motives.

The ecosystem that is required to enable one-to-one marketing is a cesspool of abuse and greed. In a pristine world of marketing with players who sport shiny ideals and rock-solid ethics, maybe it would be okay. Maybe. Personally, I wouldn’t take that bet. But in the world we actually live and work in, it’s a sure recipe for disaster.

To see just how subversive data-driven marketing can get, read “Mindf*ck” by Christopher Wylie. If that name sounds vaguely familiar to you, let me jog your memory. Wylie is the whistleblower who first exposed the Cambridge Analytica scandal. An openly gay, liberal, pink-haired Canadian, he seems an unlikely candidate to be the architect of the data-driven “Mindf*ck” machine that drove Trump into office and the Brexit vote over the 50% threshold.

Wylie admits to being blinded by the tantalizing possibilities of what he was working on at Cambridge Analytica: “Every day, I overlooked, ignored, or explained away warning signs. With so much intellectual freedom, and with scholars from the world’s leading universities telling me we were on the cusp of ‘revolutionizing’ social science, I had gotten greedy, ignoring the dark side of what we were doing.”

But Wylie is more than a whistleblower. He’s a surprisingly adept writer who has a firm grasp on not just the technical aspects, but also the psychology behind the weaponization of data. If venture capitalist Roger McNamee’s tell-all expose of Facebook, “Zucked,”  kept you up at night, “Mindf*ck” will give you screaming night terrors.

I usually hold off jumping on the year-end prognostication bandwagon, because I’ve always felt it’s a mug’s game. I would like to think that 2020 will be the year when the world becomes “woke” to the threat of profit-driven data abuse — but based on our collective track record of ignoring inconvenient truths, I’m not holding my breath.

‘Twas the Night Before the Internet

Today, just one day before Christmas, my mind swings to the serendipitous side. I don’t know about you, but for me, 2019 has been a trying year. While you would never know it by the collection of columns I’ve produced over the past 12 months, I have tried to find the glimpses of light in the glowering darkness.

Serendipity Sidetrack #1: “Glowering” is a word we don’t use much anymore. It refers to someone with a dark, angry expression on their face. As such, it’s pretty timely and relevant. You’d think we would use it more.

One of my personal traditions during the holidays is to catch one of the fourteen billion airings of “It’s a Wonderful Life.” Yes, it’s quintessentially Capraesque. Yes, it’s corny as hell. But give me a big seasonal heaping helping of Jimmy Stewart, Donna Reed and that “crummy little town” known as Bedford Falls.

Serendipity Sidetrack #2: The movie “It’s a Wonderful Life” is based on a 1939 short story by Philip Van Doren Stern. He tried to get it published for several years with no success. He finally self-published it and sent it to 200 friends as a 24-page Christmas card. One of these cards ended up on the desk of an executive at RKO pictures, who convinced the studio to buy the rights in 1943 as a vehicle for its star Cary Grant.

That movie never got made and the project was shelved for the rest of World War II. After the war, director Frank Capra read the script and chose it as his first Hollywood movie after making war documentaries and training films.

The movie was panned by critics and ignored by audiences. It was a financial disaster, eventually leading to the collapse of Capra’s new production company, Liberty Films. One other stray tidbit: during the scene at the high school dance where the gym floor opens over the pool (which was shot at Beverly Hills High School), Mary’s obnoxious date Freddie is played by an adult Carl “Alfalfa” Switzer, from the “Our Gang” series.

But I digress. This seasonal ritual got me thinking along “what if” lines. We learn what Bedford Falls would be like if George Bailey was never born. But maybe the same narrative machinery could be applied to another example: What would Christmas (or your seasonal celebration of choice) be like if the Internet had never happened?

As I pondered this, I realized that there’s really only one aspect of the internet that materially impacts what the holidays have become. These celebrations revolve around families, so if we were going to look for changes wrought by technology, we have to look at the structure and dynamics of the family unit.

Serendipity Sidetrack #3: Christmas was originally not a family-based celebration. It became so in Victorian England thanks to Queen Victoria, Prince Albert and Charles Dickens. After the marriage of the royal couple, Albert brought the German tradition of the Christmas tree to Windsor Castle. Pictures of the royals celebrating with family around the tree firmly shifted the holiday towards its present warm-hearted family center.

In 1843, Dickens added social consciousness to the party with the publication of “A Christmas Carol.” The holiday didn’t take its detour towards overt consumerism until the prosperity of the 1950s.

But back to my rapidly unraveling narrative thread: What would Christmas be like without the Internet?

I have celebrated Christmas in two different contexts: The first, in my childhood and the second with my own wife and family.

I grew up with just my immediate family in rural Alberta, geographically distant from aunts, uncles and cousins. For dinner there would be six of us around the table. We might try to call an aunt or uncle who lived some 2,000 miles away, but usually the phone lines were so busy we couldn’t get through.

The day was spent with each other and usually involved a few card games, a brief but brisk walk and getting ready for Christmas dinner. It was low-key, but I still have many fond memories of my childhood Christmases.

Then I got married. My wife, who is Italian, has dozens and dozens and dozens of relatives within a stone’s throw in any direction. For us, Christmas is now a progressive exercise to see just how many people can be crammed into the same home. It begins at our house for Christmas morning with the “immediate” family (remember, I use the term in its Italian context). The head count varies between 18 and 22 people.

Then, we move to Christmas dinner with the “extended” family. The challenge here is finding a house big enough, because we are now talking 50 to 75 people. It’s loud, it’s chaotic — and I couldn’t imagine Christmas any other way.

The point here is how the Internet has shifted the nature of the celebration. In my lifespan, I have seen two big shifts, both to do with the nature of our personal connections. And like most things with technology, one has been wonderful while the other has been troubling.

First of all, thanks to the Internet, we can extend our family celebrations beyond the limits of geography. I can now connect with family members who don’t live in the same town.

But, ironically, the same technology has been eroding the bonds we have with the family we are physically present with. We may be in the same room, but our minds are elsewhere, preoccupied with the ever-present screens in our pockets or purses. In my pre-Internet memories of Christmas, we were fully there with our families. Now, this is rarely the case.

And one last thought. I find — sadly — that Christmas is just one more occasion to be shared through social media. For some of us, it’s not so much who we’re with or what we’re doing, but about how it will look in our Instagram post.

The Ruts of Our Brain

We are not – by nature – open-minded. In fact, as we learn something, the learning creates neural pathways in our brain that we tend to stick to. In other words, the more we learn, the bigger the ruts get.

Our brains are this way by design. At its core, the brain is an energy-saving device. If there are two options open to it, one requiring more cognitive processing and one requiring less, the brain will default to the less resource-intensive option.

This puts expertise into an interesting new perspective. In a recent study, researchers from Cold Spring Harbor Laboratory, Columbia University, University College London and the Flatiron Institute found that when mice learn a new task, the neurons in their brain actually change as they move from novice to expert. At the beginning, as they’re learning the task, the required neurons don’t “fire” until the brain makes a decision. But, as expertise is gained, those same neurons start responding before they’re even needed. It’s essentially Hebbian theory (named after neuropsychologist Donald Hebb) in action: the neurons that fire together eventually wire together.
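As an aside, Hebb’s rule can be sketched in a few lines of code: a connection strengthens every time the neurons at both ends of it are active together, so repeated co-activation carves exactly the kind of rut described above. This is just an illustrative toy (the function name and learning rate are my own choices, not anything from the study):

```python
# A minimal sketch of Hebb's rule: a connection's weight grows
# whenever the neurons on both ends of it fire at the same time.
def hebbian_update(weight, pre, post, rate=0.1):
    """One learning step: strengthen the link in proportion to co-activation."""
    return weight + rate * pre * post

w = 0.0
for _ in range(20):  # the same pathway fires again and again...
    w = hebbian_update(w, pre=1.0, post=1.0)
# ...and the connection ends up strongly reinforced: the "rut" deepens.
```

Note that if either neuron is silent (`pre` or `post` is zero), the weight doesn’t move at all — only repeated joint activity deepens the groove.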

We tend to think of experts as bringing a well-honed subset of intellectual knowledge to a question. And that is true, as long as the question is well within their area of expertise. But the minute an expert ventures outside of their “rut,” they begin to flounder. In fact, even when they are in their area of expertise but are asked to predict where that path may lead in the future – beyond their current rut – their expertise doesn’t help them. In 2005, psychologist Philip Tetlock published “Expert Political Judgment” – a book showing the results of a 20-year study on the prediction track record of experts. It wasn’t good. According to a New Yorker review of the book, “Human beings who spend their lives studying the state of the world…are poorer forecasters than dart-throwing monkeys.”

Why? Well, just like those mice in the above-mentioned study, once we have a rut, our brains like to stick to the rut. It’s just easier for us. And experts have very deep ruts. The deeper the rut, the more effort it takes to peer above it. As Tetlock found, when it comes to predicting what might happen in some area in the future, even if you happen to be an expert in that area, you’d probably be better off flipping a coin than relying on your brain.

By the way, for most of human history, this has been a feature, not a bug. Saving cognitive energy is a wonderful evolutionary advantage. If you keep doing the same thing over and over, eventually the brain pre-lights the neuronal path required, saving itself time and energy. The brain is directing anticipated traffic at faster than the speed of thought. And it’s doing it so well, it would take a significant amount of cognitive horsepower to derail this action.

Like I said, in a fairly predictable world of cause and effect, this system works. But in an uncertain world full of wild-card complexity, it can be crippling.

Complex worlds require foxes, not hedgehogs. This analogy also comes from Tetlock’s book. According to an old Greek fable, “The fox knows many things, but the hedgehog knows just one thing.” To that I would add: the fox knows a little about many things, but the hedgehog knows a lot about one thing. In other words, the hedgehog is an expert.

In Tetlock’s study, people with “fox” qualities had a significantly better track record than “hedgehogs” when it came to predicting the future. Their brains were better able to take the time to synthesize the various data inputs required to deal with the complexity of crystal-balling the future because they weren’t barrelling down a preordained path carved by years of accumulated expertise.

But it’s not just expertise that creates these ruts in our brains. The same pattern plays out when we look at the role our beliefs play in how open-minded we are. The stronger the belief, the deeper the rut.

Again, we have to remember that this tendency of our brains to form well-travelled grooves over time has been crafted by the blind watchmaker of evolution. But that doesn’t make it any less troubling when we think about the limitations it imposes in a more complex world. This is especially true when new technologies deliberately leverage our vulnerability in this area. Digital platforms ruthlessly eliminate the real estate that lies between perspectives. The ideological landscape in which foxes can effectively operate is disappearing. Increasingly we grasp for expertise – whether it’s on the right or left of any particular topic – with the goal of preserving our own mental ruts.

And as the ruts get deeper, foxes are becoming an endangered species.

Just in Time for Christmas: More Search Eye-Tracking

The good folks over at the Nielsen Norman Group have released a new search eye-tracking report. The findings are quite similar to those of a study my former company — Mediative — did a number of years ago. (This link goes to a write-up about the study. Unfortunately, the link to the original study is broken. *Insert head smack here.)

In the Nielsen Norman study, the two authors — Kate Moran and Cami Goray — looked at how a more visually rich and complex search results page would impact user interaction with the page. The authors of the report called the sum of participant interactions a “Pinball Pattern”: “Today, we find that people’s attention is distributed on the page and that they process results more nonlinearly than before. We observed so much bouncing between various elements across the page that we can safely define a new SERP-processing gaze pattern — the pinball pattern.”

While I covered this at some length when the original Mediative report came out in 2014 (in three separate columns: 1, 2 & 3), there are some themes that bear repeating. Unfortunately, the study’s authors missed what I think are some of the more interesting implications.

In the days of the “10 Blue Links” search results page, we used the same scanning strategy no matter what our intent was. In an environment where the format never changes, you can afford to rely on a stable and consistent strategy. 

In our first eye-tracking study, published in 2005, this consistent strategy led to something we called the Golden Triangle. But those days are over.

Today, when every search result can look a little bit different, it comes as no surprise that every search “gaze plot” (the path the eyes take through the results page) will also be different. Let’s take a closer look at the reasons for this. 

SERP Eye Candy

In the Nielsen Norman study, the authors felt “visual weighting” was the main factor in creating the “Pinball Pattern”: “The visual weight of elements on the page drives people’s scanning patterns. Because these elements are distributed all over the page and because some SERPs have more such elements than others, people’s gaze patterns are not linear. The presence and position of visually compelling elements often affect the visibility of the organic results near them.”

While the visual impact of the page elements is certainly a factor, I think it’s only part of the answer. I believe a bigger, and more interesting, factor is how the searcher’s brain and its searching strategies have evolved in lockstep with a more visually complex results page. 

The Importance of Understanding Intent

The reason why we see so much variation in scan patterns is that there is also extensive variation in searchers’ intent. The exact same search query could be used by someone intent on finding an online or physical place to purchase a product, comparing prices on that product, looking to learn more about the technical specs of that product, looking for how-to videos on the use of the product, or looking for consumer reviews on that product.

It’s the same search, but with many different intents. And each of those intents will result in a different scanning pattern. 

Predetermined Page Visualizations

I really don’t believe we start each search page interaction with a blank slate, passively letting our eyes be dragged to the brightest, shiniest object on the page. I think that when we launch the search, our intent has already created an imagined template for the page we expect to see. 

We have all used search enough to be fairly accurate at predicting what the page elements might be: thumbnails of videos or images, a map showing relevant local results, perhaps a Knowledge Graph result in the righthand column.

Yes, the visual weighting of elements acts as an anchor to draw the eye, but I believe the eye is using this anticipated template to efficiently parse the results page.

I have previously referred to this behavior as a “chunking” of the results page. And we already have an idea of what the most promising chunks will be when we launch the search. 

It’s this chunking strategy that’s driving the “pinball” behavior in the Nielsen Norman study.  In the Mediative study, it was somewhat surprising to see that users were clicking on a result in about half the time it took in our original 2005 study. We cover more search territory, but thanks to chunking, we do it much more efficiently.

One Last Time: Learn Information Scent

Finally, let me drag out a soapbox I haven’t used for a while. If you really want to understand search interactions, take the time to learn about Information Scent and how our brains follow it (Information Foraging Theory — Pirolli and Card, 1999 — the link to the original study is also broken. *Insert second head smack, this one harder.)

This is one area where the Nielsen Norman Group and I are totally aligned. In 2003, Jakob Nielsen — the first N in NNG — called the theory “the most important concept to emerge from human-computer interaction research since 1993.”

On that we can agree.

Why Quitting Facebook is Easier Said than Done

Not too long ago, I was listening to an interview with a privacy expert about… you guessed it, Facebook. The gist of the interview was that Facebook can’t be trusted with our personal data, as it has proven time and again.

But when asked if she would quit Facebook completely because of this — as tech columnist Walt Mossberg did — the expert said something interesting: “I can’t really afford to give up Facebook completely. For me, being able to quit Facebook is a position of privilege.”

Wow!  There is a lot living in that statement. It means Facebook is fundamental to most of our lives — it’s an essential service. But it also means that we don’t trust it — at all.  Which puts Facebook in the same category as banks, cable companies and every level of government.

Facebook — in many minds anyway — became an essential service because of Metcalfe’s Law, which states that the effect of a network is proportional to the square of the number of connected users of the system. More users = disproportionately more value. Facebook has Metcalfe’s Law nailed. It has almost two and a half billion users.
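Metcalfe’s math is easy to sketch: with n users there are n(n − 1)/2 possible pairwise connections, so value grows on the order of n². Here is a minimal illustration; the value-per-connection constant is an arbitrary placeholder, not a real Facebook figure:

```python
def network_value(users: int, value_per_connection: float = 1.0) -> float:
    """Metcalfe-style network value: proportional to the number of
    possible pairwise connections, n * (n - 1) / 2 -- order n^2."""
    return value_per_connection * users * (users - 1) / 2

# Doubling the user base roughly quadruples the network's value:
ratio = network_value(1_000_000) / network_value(500_000)  # just over 4
```

That quadratic growth, not just the raw user count, is what makes the network so hard to walk away from.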

But it’s more than just sheer numbers. It’s the nature of engagement. Thanks to a premeditated addictiveness in Facebook’s design, its users are regular users. Of those 2.5 billion users, 1.6 billion log in daily, and 1.1 billion log in daily from a mobile device. That means that 15% of all the people in the world are constantly — addictively — connected to Facebook.

And that’s why Facebook appears to be essential. If we need to connect to people, Facebook is the most obvious way to do it. If we have a business, we need Facebook to let our potential customers know what we’re doing. If we belong to a group or organization, we need Facebook to stay in touch with other members. If we are social beasts at all, we need Facebook to keep our social network from fraying away.

We don’t trust Facebook — but we do need it.

Or do we? After all, we Homo sapiens have managed to survive for 99.9925% of our collective existence without Facebook. And there is mounting research indicating that going cold turkey on Facebook is great for your mental health. But like all things that are good for you, quitting Facebook can be a real pain in the ass.
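That 99.9925% figure checks out on the back of an envelope, assuming the round numbers usually quoted: Homo sapiens is roughly 200,000 years old, and Facebook about 15.

```python
species_years = 200_000   # assumed age of Homo sapiens (round figure)
facebook_years = 15       # Facebook's age at the time of writing
share_without_facebook = 100 * (1 - facebook_years / species_years)
# share_without_facebook works out to 99.9925
```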

Last year, New York Times tech writer Brian Chen decided to ditch Facebook. This is a guy who is fully conversant in tech — and even he found that making the break is much easier said than done. Facebook, in its malevolent brilliance, has erected some significant barriers to exit for users who do try to make a break for it.

This is especially true if you have fallen into the convenient trap of using Facebook’s social sign-in on sites rather than juggling multiple passwords and user IDs. If you’re up for the challenge, Chen has put together a 6-step guide to making a clean break of it.

But what if you happen to use Facebook for advertising? You’ve essentially sold your soul to Zuckerberg. Reading through Chen’s guide, I’ve decided that it’s just easier to go into the Witness Protection Program. Even there, Facebook will still be tracking me.

By the way, after six months without Facebook, Chen did a follow-up on how his life had changed. The short answer is: not much, but what did change was for the better. His family didn’t collapse. His friends didn’t desert him. He still managed to have a social life. He spent a lot less on spontaneous online purchases. And he read more books.

The biggest outcome was that advertisers “gave up on stalking” him. Without a steady stream of personal data from Facebook, Instagram thought he was a woman.

Whether you’re able to swear off Facebook completely or not, I wonder what the continuing meltdown of trust in Facebook will do for its usage patterns. As in most things digital, young people seem to have intuitively stumbled on the best way to use Facebook. Use it if you must to connect to people when you need to (in their case, grandmothers and great-aunts) — but for heaven’s sake, don’t post anything even faintly personal. Never afford Facebook’s AI the briefest glimpse into your soul. No personal affirmations, no confessionals, no motivational posts and — for the love of all that is democratic — nothing political.

Oh, one more thing. Keep your damned finger off of the like button, unless it’s for your cousin Shermy’s 55th birthday celebration in Zihuatanejo.

Even then, maybe it’s time to pick up the phone and call the ol’ Shermeister. It’s been too long.

The Hidden Agenda Behind Zuckerberg’s “Meaningful Interactions”

It probably started with a good intention. Facebook — aka Mark Zuckerberg — wanted to encourage more “Meaningful Interactions.” And so, early last year, Facebook engineers started making some significant changes to the algorithm that determined what you saw in your News Feed. Here are some excerpts from Zuck’s post to that effect:

“The research shows that when we use social media to connect with people we care about, it can be good for our well-being. We can feel more connected and less lonely, and that correlates with long term measures of happiness and health. On the other hand, passively reading articles or watching videos — even if they’re entertaining or informative — may not be as good.”

That makes sense, right? It sounds logical. Zuckerberg went on to say how they were changing Facebook’s algorithm to encourage more “Meaningful Interactions.”

“The first changes you’ll see will be in News Feed, where you can expect to see more from your friends, family and groups.

As we roll this out, you’ll see less public content like posts from businesses, brands, and media. And the public content you see more will be held to the same standard — it should encourage meaningful interactions between people.”


Let’s fast-forward almost two years, and we now see the outcome of that good intention: an ideological landscape with a huge chasm where the middle ground used to be.

The problem is that Facebook’s algorithm naturally favors content from like-minded people. And it doesn’t take a very high degree of ideological homogeneity to create a highly polarized landscape. This shouldn’t have come as a surprise: American economist Thomas Schelling showed us how easy it is for segregation to happen almost 50 years ago.

The Schelling Model of Segregation was created to demonstrate why racial segregation was such a chronic problem in the U.S., even given repeated efforts to desegregate. The model showed that even when we’re pretty open minded about who our neighbors are, we will still tend to self-segregate over time.

The model works like this: a grid represents a population with two different types of agents, X and O. The square an agent occupies represents where it lives. If the agent is satisfied, it stays put. If it isn’t satisfied, it moves to a new location. The variable here is the level of satisfaction, determined by the percentage of immediate neighbours that are the same type of agent. For example, the satisfaction threshold might be set at 50%, where an X agent needs at least 50% of its neighbours to also be of type X. (If you want to try the model firsthand, Frank McCown, a Computer Science professor at Harding University, created an online version.)

The most surprising thing that comes out of the model is that this threshold of satisfaction doesn’t have to be set very high at all for extensive segregation to happen over time. You start to see significant “clumping” of agent types at percentages as low as 25%. At 40% and higher, you see sharp divides between the X and O communities. Remember, even at 40%, that means that Agent X only wants 40% of their neighbours to also be of the X persuasion. They’re okay being surrounded by up to 60% Os. That is much more open-minded than most human agents I know.
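For the curious, the model is simple enough to sketch in a few lines of code. This is a toy version, not McCown’s implementation: the grid size, the vacancy rate and the move rule (an unsatisfied agent jumps to a random empty cell) are all arbitrary choices.

```python
import random

def schelling(threshold=0.4, steps=60_000, seed=42):
    """Toy Schelling segregation model on a 20x20 wrap-around grid.
    Agents 'X' and 'O' plus some empty cells; an agent with fewer than
    `threshold` same-type occupied neighbours moves to a random empty cell.
    Returns the average same-type neighbour share across all agents."""
    size = 20
    rng = random.Random(seed)
    cells = ['X'] * 170 + ['O'] * 170 + [None] * 60  # 400 cells, 15% empty
    rng.shuffle(cells)
    grid = [cells[i * size:(i + 1) * size] for i in range(size)]

    def neighbours(r, c):
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if (dr, dc) != (0, 0):
                    yield grid[(r + dr) % size][(c + dc) % size]

    def same_type_share(r, c):
        occupied = [n for n in neighbours(r, c) if n is not None]
        if not occupied:
            return 1.0
        return sum(n == grid[r][c] for n in occupied) / len(occupied)

    for _ in range(steps):
        r, c = rng.randrange(size), rng.randrange(size)
        if grid[r][c] is not None and same_type_share(r, c) < threshold:
            empties = [(i, j) for i in range(size) for j in range(size)
                       if grid[i][j] is None]
            i, j = rng.choice(empties)
            grid[i][j], grid[r][c] = grid[r][c], None

    shares = [same_type_share(r, c)
              for r in range(size) for c in range(size)
              if grid[r][c] is not None]
    return sum(shares) / len(shares)
```

Run it and the average same-type neighbour share ends up well above the 40% threshold any individual agent demands; mild individual preferences compound into heavily clumped neighbourhoods.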

Now, let’s move the Schelling Model to Facebook. We know from the model that even pretty open-minded people will physically segregate themselves over time. The difference is that on Facebook, they don’t move to a new part of the grid, they just hit the “unfollow” button. And the segregation isn’t physical – it’s ideological.

This natural behavior is then accelerated by Facebook’s “Meaningful Interactions” algorithm, which filters on the basis of people you have connected with, setting in motion an ever-tightening spiral that eventually restricts your feed to a very narrow ideological horizon. The resulting cluster then becomes a segment used for ad targeting. We can quickly see how Facebook both built these very homogeneous clusters by changing its algorithm and then profits from them by providing advertisers the tools to micro-target them.

Finally, after doing all this, Facebook absolves itself of any responsibility to ensure subversive and blatantly false messaging isn’t delivered to these ideologically vulnerable clusters. It’s no wonder comedian Sacha Baron Cohen just took Zuck to task, saying “if Facebook were around in the 1930s, it would have allowed Hitler to post 30-second ads on his ‘solution’ to the ‘Jewish problem.’”

In rereading Mark Zuckerberg’s post from two years ago, you can’t help but start reading between the lines. First of all, there is mounting evidence that disproves his contention that meaningful social media encounters help your well-being. It appears that quitting Facebook entirely is much better for you.

And secondly, I suspect that — just like his defence of running false and malicious advertising by citing free speech — Zuck has a not-so-hidden agenda here. I’m sure Zuckerberg and his Facebook engineers weren’t oblivious to the fact that their changes to the algorithm would result in nicely segmented psychographic clusters that would be like catnip to advertisers — especially political advertisers. They were consolidating exactly the same vulnerabilities that were exploited by Cambridge Analytica.

They were building a platform that was perfectly suited to subvert democracy.

Running on Empty: Getting Crushed by the Crush It Culture

“Nobody ever changed the world on 40 hours a week.”

Elon Musk

Those damned Protestants and their work ethic. Thanks to them, unless you’re willing to put in a zillion hours a week, you’re just a speed bump on the road to all that is good in the world. Take Mr. Musk, for example. If you happen to work at Tesla, or SpaceX, or the Boring Company, Elon has figured out what your average work week should be, “(It) Varies per person, but about 80 sustained, peaking above 100 at times. Pain level increases exponentially above 80.”

“Pain level increases exponentially above 80”? WTF, Mr. Musk!

But he’s not alone. Google famously built its Mountain View campus so employees never had to go home. Alibaba Group founder Jack Ma calls the intense work culture at his company a “huge blessing.” He calls it the “996” work schedule: 9 am to 9 pm, 6 days a week. That’s 72 hours, if you’re counting. But even that wouldn’t cut it if you worked for Elon Musk. You’d be a deadbeat.

This is the “Crush It” culture, where long hours equate to dedication and – by extension – success. No pain, no gain.

We spend lots of time talking about the gain — so let me spend just one column talking about the pain. Pain such as mental illness, severe depression, long-term disabilities and strokes. Those who overwork are more likely to overeat, smoke, drink excessively and develop other self-destructive habits.

You’re not changing the world. You’re shortening your life. The Japanese call it karoshi: death by overwork.

Like so many things, this is another unintended consequence of a digitally mediated culture. Digital speeds everything up. But our bodies – and brains – aren’t digital. They burn out if they move too fast – or too long.

Overwork as a sign of superior personal value is a fairly new concept in the span of human history. It came from the Puritans who settled in New England. They believed that those who worked hard at their professions were the ones chosen to get into heaven. The more wealth you amassed from your work, the more evidence there was that you were one of the chosen.

Lately, the creeping Capitalist culture of over-working has most firmly embedded itself in the tech industry. There, the number of hours you work has become a proxy of your own worth. A twisted type of machismo has evolved and has trapped us all into thinking that an hour not spent at our jobs is an hour wasted. We are looked down upon for wanting some type of balance in our lives.

Unfortunately for the Musks and Mas and other modern-day task masters – the biology just doesn’t support their proposed work schedules.

First, our brains need rest. Back in the 17th century, when those Puritans proved their worth through work, earning a living was usually a physical endeavour. The load of overwork was spread amongst the fairly simple mechanical machinery of our own bodies. Muscles got sore. Joints ached. But they recovered.

The brain is a much more complex beast. When it gets overworked, it loses its executive ability to focus on the task at hand. When your work takes place on a desktop or laptop where there are unlimited diversions just a click away, you suddenly find yourself 45 minutes into an unplanned YouTube marathon or scrolling through your Facebook feed. It becomes a downward spiral that benefits no one.

An overworked mind also loses its ability to spin down in the evening so you can get an adequate amount of sleep. When your co-workers start boasting of being able to function on just 3 or 4 hours of sleep, they are lying. They are lying to you but, worse, they are lying to themselves. Very few of us can function adequately on less than 7 or 8 hours of sleep. For the rest of us, the negative effects start to accumulate. A study found that sleep deprivation has the same impact as drinking too much: those getting less than 7 hours of sleep fared the same as or worse on a cognitive test than those with a 0.05% blood-alcohol level. The legal limit in most states is 0.08%.

Finally, in an essay on Medium, Rachel Thomas points out that the Crush It Culture is discriminatory. Those who have a disability or chronic illness simply have fewer hours in the day to devote to work. They need time for medical support and usually require more sleep. In an industry like tech, where there is an unhealthy focus on the number of hours worked, these workers — who Thomas says make up at least 30% of the total workforce — are shut out.

The Crush It Culture is toxic. The science simply doesn’t support it. The only ones evangelizing it are those that directly benefit from this modernized version of feudalism.  It’s time to call Bullshit on them.