The Biases of Artificial Intelligence: Our Devils are in the Data

I believe that – over time – technology does move us forward. I further believe that, even with all the unintended consequences it brings, technology has made the world a better place to live in. I would rather step forward with my children and grandchildren (the first of which has just arrived) into a more advanced world than step backwards in the world of my grandparents, or my great grandparents. We now have a longer and better life, thanks in large part to technology. This, I’m sure, makes me a techno-optimist.

But my optimism is of a pragmatic sort. I'm fully aware that it is not a smooth path forward. There are bumps and potholes aplenty along the way. I accept that along with my optimism.

Technology, for example, does not play all that fairly. Techno-optimists tend to be white and mostly male. They usually come from rich countries, because technology helps rich countries far more than it helps poor ones. Technology plays by the same rules as trickle-down economics: a rising tide that will eventually raise all boats, just not at the same rate.

Take democracy, for instance. In June 2009, journalist Andrew Sullivan declared "The revolution will be Twittered!" after protests erupted in Iran. Techno-optimists and neo-liberals were quick to declare social media and the Internet the saviours of democracy. But, even then, the optimism was premature – even misplaced.

In his book The Net Delusion: The Dark Side of Internet Freedom, journalist and social commentator Evgeny Morozov details how digital technologies have been just as effectively used by repressive regimes to squash democracy. The book was published in 2011. Just 5 years later, that same technology would take the U.S. on a path that came perilously close to dismantling democracy. As of right now, we’re still not sure how it will all work out. As Morozov reminds us, technology – in and of itself – is not an answer. It is a tool. Its impact will be determined by those that built the tool and, more importantly, those that use the tool.

Also, tools are not built out of the ether. They are necessarily products of the environment that spawned them. And this brings us to the systemic problems of artificial intelligence.

Search is something we all use every day. And we probably don't think of Google (or other search engines) as biased, or even racist. But a recent study published in the journal Proceedings of the National Academy of Sciences shows that the algorithms behind search are built on top of the biases endemic in our society.

“There is increasing concern that algorithms used by modern AI systems produce discriminatory outputs, presumably because they are trained on data in which societal biases are embedded,” says Madalina Vlasceanu, a postdoctoral fellow in New York University’s psychology department and the paper’s lead author.

To assess possible gender bias in search results, the researchers examined whether words that should refer with equal probability to a man or a woman, such as “person,” “student,” or “human,” are more often assumed to be a man. They conducted Google image searches for “person” across 37 countries. The results showed that the proportion of male images yielded from these searches was higher in nations with greater gender inequality, revealing that algorithmic gender bias tracks with societal gender inequality.
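To make that method a little more concrete, here is a minimal sketch of the kind of analysis the researchers describe: compute the share of male-labeled images in the top results for a gender-neutral query in each country, then correlate that share with a national gender-inequality score. This is my own illustration, not the study's code, and every country, label and score in it is invented.

```python
from statistics import correlation  # Pearson's r, available in Python 3.10+

# Hypothetical perceived-gender labels for the top image results per country
image_labels = {
    "CountryA": ["male", "male", "male", "female"],
    "CountryB": ["male", "female", "female", "female"],
    "CountryC": ["male", "male", "female", "female"],
    "CountryD": ["male", "male", "male", "male"],
}

# Hypothetical national gender-inequality scores (higher = more unequal)
inequality_index = {"CountryA": 0.55, "CountryB": 0.20, "CountryC": 0.35, "CountryD": 0.70}

# Share of male-labeled images per country
male_share = {c: labels.count("male") / len(labels) for c, labels in image_labels.items()}

countries = sorted(male_share)
r = correlation([male_share[c] for c in countries],
                [inequality_index[c] for c in countries])
print(f"Correlation between male share of results and gender inequality: {r:.2f}")
```

A strongly positive correlation in a setup like this is what the authors mean when they say algorithmic gender bias tracks with societal gender inequality.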

In a 2020 opinion piece in the MIT Technology Review, researcher and AI activist Deborah Raji wrote:

“I’ve often been told, ‘The data does not lie.’ However, that has never been my experience. For me, the data nearly always lies. Google Image search results for ‘healthy skin’ show only light-skinned women, and a query on ‘Black girls’ still returns pornography. The CelebA face data set has labels of ‘big nose’ and ‘big lips’ that are disproportionately assigned to darker-skinned female faces like mine. ImageNet-trained models label me a ‘bad person,’ a ‘drug addict,’ or a ‘failure.’ Data sets for detecting skin cancer are missing samples of darker skin types.”

Deborah Raji, MIT Technology Review

These biases in search highlight the biases in a culture. Search brings back a representation of content that has been published online; a reflection of a society’s perceptions. In these cases, the devil is in the data. The search algorithm may not be inherently biased, but it does reflect the systemic biases of our culture. The more biased the culture, the more it will be reflected in technologies that comb through the data created by that culture. This is regrettable in something like image search results, but when these same biases show up in the facial recognition software used in the justice system, it can be catastrophic.

In an article in Penn Law’s Regulatory Review, the authors reported that, “In a 2019 National Institute of Standards and Technology report, researchers studied 189 facial recognition algorithms—“a majority of the industry.” They found that most facial recognition algorithms exhibit bias. According to the researchers, facial recognition technologies falsely identified Black and Asian faces 10 to 100 times more often than they did white faces. The technologies also falsely identified women more than they did men—making Black women particularly vulnerable to algorithmic bias. Algorithms using U.S. law enforcement images falsely identified Native Americans more often than people from other demographics.”

Most of these issues lie with how technology is used. But what about those who build the technology? Couldn't they program the bias out of the system?

There we have a problem. The thing about societal bias is that it is typically recognized by its victims, not by those who propagate it. And the culture of the tech industry is hardly gender-balanced or diverse. According to a report from the McKinsey Institute for Black Economic Mobility, if the current trajectory holds, experts in tech believe it would take 95 years for Black workers to reach an equitable level of private sector paid employment.

Facebook, for example, moved less than a single percentage point in its hiring of Black tech workers, from 3% in 2014 to 3.8% in 2020, but improved by 8% over those same six years in hiring women. Only 4.3% of the company's workforce is Hispanic. This essential whiteness of tech extends to the field of AI as well.

Yes, I’m a techno-optimist, but I realize that optimism must be placed in the people who build and use the technology. And because of that, we must try harder. We must do better. Technology alone isn’t the answer for a better, fairer world.  We are.

I Was So Wrong in 1996…

It’s that time of year – the time when we sprain our neck trying to look backwards and forwards at the same time. Your email inbox, like mine, is probably crammed with 2021 recaps and 2022 predictions.

I've given up on predictions. I have a horrible track record. In just a few seconds, I'll tell you how horrible. But here, at the beginning of 2022, I will look back. And I will substantially overshoot "a year in review" by going back all the way to 1996, 26 years ago. Let me tell you why I'm in the mood for some reminiscing.

In amongst the aforementioned "look back" and "look forward" items I saw recently, there was something else that hit my radar: a number of companies looking for SEO directors. After being out of the industry for almost 10 years, I was mildly surprised that SEO still seemed to be a rock-solid career choice. And that brings me both to my story about 1996 and to what was probably my worst prediction about the future of digital marketing.

It was in late 1996 that I first started thinking about optimizing sites for the search engines and directories of the time: Infoseek, Yahoo, Excite, Lycos, Altavista, Looksmart and Hotbot. Early in 1997 I discovered Danny Sullivan's Webmaster's Guide to Search Engines. It was revelatory. After much trial and error, I was reasonably certain I could get sites ranking for pretty much any term. We had our handful of local clients ranking on Page One of those engines for terms like "boats," "hotels," "motels," "men's shirts" and "Ford Mustang." It was the Wild West. Small and nimble web start-ups were routinely kicking Fortune 500 ass on the digital frontier.

As a local agency that had played around with web design while doing traditional marketing, I was intrigued by this opportunity. Somewhere near the end of 1997, I wrote an internal manifesto in which I speculated on the future of this "Internet" thing and what it might mean for our tiny agency (I had just brought on board my eventual partner, Bill Barnes, and we had one other full-time employee). I wish I could find that original document, but I remember saying something to the effect of, "This search engine opportunity will probably only last a year or two until the engines crack down and close the loopholes." Given that, we decided to go for broke and seize that opportunity.

In 1998 we registered the domain www.searchengineposition.com. This was a big step. If you could get your main keywords in your domain name, it virtually guaranteed you link juice. At that time, “Search engine optimization” hadn’t emerged as the industry label. Search engine positioning was the more common term. We couldn’t get www.searchenginepositioning.com because domain names were limited by the number of characters you could use.

We had our domain and soon we had a site. We needed all the help we could get, because according to my prediction, we only had until 2000 or so to make as much as we could from this whole “search thing.” The rest, as they say, was history. It just wasn’t the history I had predicted.

To be fair, I wasn't the only one making shitty predictions at the time. In 1995, 3Com co-founder Robert Metcalfe (also the co-inventor of Ethernet) said in a column in InfoWorld:

“Almost all of the many predictions now being made about 1996 hinge on the Internet’s continuing exponential growth. But I predict the Internet, which only just recently got this section here in InfoWorld, will soon go spectacularly supernova and in 1996 catastrophically collapse.”

And in 1998, Nobel Prize-winning economist Paul Krugman said,

“The growth of the Internet will slow drastically, as the flaw in ‘Metcalfe’s law’ becomes apparent: most people have nothing to say to each other! By 2005, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s”

Both of those people were way smarter than I was, so if I was clueless about the future, at least I was in good company.

As we now know, SEO would be fine, thank you very much. In 2004, some 6 years later, in my very first post for MediaPost, I wrote:

“I believe years from now that…2004 … will be a milestone in the (Search) industry. I think it will mark the beginning of a year that will dramatically alter the nature of search marketing.”

That prediction, as it turned out, was a little more accurate. In 2004, Google's AdWords program really hit its stride, doubling revenue from $1.5 billion the previous year to $3 billion and starting its hockey-stick climb up to its current level, just south of $150 billion (in 2020).

The reason search – and organic search optimization – never fizzled out was that it was a fundamental connection between user intent and the ever-expanding ocean of available content. Search Engine Optimization turned out to be a much better label for the industry than Search Engine Positioning, despite my unfortunate choice of domain names. The latter was really an attempt to game the algorithms. The former was making sure content was findable and indexable. Hindsight has shown that it was a much more sustainable approach.

I ended that first post talking about the search industry of 2004 by saying,

“And to think, one day I’ll be able to say I was there.”

I guess today is that day.

Whatever Happened to the Google of 2001?

Having lived through it, I can say that the decade from 2000 to 2010 was an exceptional time in corporate history. I was reminded of this as I was reading media critic and journalist Ken Auletta's book, "Googled: The End of the World as We Know It." Auletta, along with many others, sensed a seismic disruption in the way media worked. A ton of books came out on this topic in the same time frame, and Google was the company most often singled out as the cause of the disruption.

Auletta’s book was published in 2009, near the end of this decade, and it’s interesting reading it in light of the decade plus that has passed since. There was a sort of breathless urgency in the telling of the story, a sense that this was ground zero of a shift that would be historic in scope. The very choice of Auletta’s title reinforces this: “The End of the World as We Know It.”

So, with 10-plus years of hindsight, was he right? Did the world we knew end?

Well, yes. And Google certainly contributed to this. But it probably didn’t change in quite the way Auletta hinted at. If anything, Facebook ended up having a more dramatic impact on how we think of media, but not in a good way.

At the time, we all watched Google take its first steps as a corporation with a mixture of incredulous awe and not a small amount of schadenfreude. Larry Page and Sergey Brin were determined to do it their own way.

We in the search marketing industry had front row seats to this. We attended social mixers on the Google campus. We rubbed elbows at industry events with Page, Brin, Eric Schmidt, Marissa Mayer, Matt Cutts, Tim Armstrong, Craig Silverstein, Sheryl Sandberg and many others profiled in the book. What they were trying to do seemed a little insane, but we all hoped it would work out.

We wanted a disruptive and successful company to not be evil. We welcomed its determination — even if it seemed naïve — to completely upend the worlds of media and advertising. We even admired Google’s total disregard for marketing as a corporate priority.

But there was no small amount of hubris at the Googleplex — and for this reason, we also hedged our hopeful bets with just enough cynicism to be able to say “we told you so” if it all came crashing down.

In that decade, everything seemed so audacious and brashly hopeful. It seemed like ideological optimism might — just might — rewrite the corporate rulebook. If a revolution did take place, we wanted to be close enough to golf clap the revolutionaries onward without getting directly in the line of fire ourselves.

Of course, we know now that what took place wasn’t nearly that dramatic. Google became a business: a very successful business with shareholders, a grown-up CEO and a board of directors, but still a business not all that dissimilar to other Fortune 100 examples. Yes, Google did change the world, but the world also changed Google. What we got was more evolution than revolution.

The optimism of 2000 to 2010 would be ground down in the next 10 years by the same forces that have been driving corporate America for the past 200 years: the need to expand markets, maximize profits and keep shareholders happy. The brash ideologies of founders would eventually morph to accommodate ad-supported revenue models.

As we now know, the world was changed by the introduction of ways to make advertising even more pervasively influential and potentially harmful. The technological promise of 20 years ago has been subverted to screw with the very fabric of our culture.

I didn’t see that coming back in 2001. I probably should have known better.

How We Forage for the News We Want

The Reuters Institute out of the UK just released a comprehensive study looking at how people around the world are finding their news. There is a lot here, so I'll break it into pieces over a few columns and look at the most interesting aspects. Today, I'll look at the 50,000-foot view, which can best be summarized as a dysfunctional relationship between our news sources and ourselves. And like most dysfunctional relationships, the culprit here is a lack of trust.

Before we dive in, we should spend some time looking at how the way we access news has changed over the last several years.

Over my lifetime, we have trended in two general directions – less cognitively demanding news channels and less destination specific news sources. The most obvious shift has been away from print. According to Journalism.org and the Pew Research Center, circulation of U.S. Daily newspapers peaked around 1990, at about 62 and a half million. That’s one subscription for every 4 people in the country at that time.

In 2018, it was projected that circulation had dropped more than 50%, to less than 30 million. That would have been one subscription for every 10 people. We were simply no longer reading our news in print. And that may have a significant impact on our understanding of the news. I'll return to this in another column, but for now, let's just understand that our brain operates in a significantly different way when it's reading than when it's watching or listening.

Up to the end of the last century, we generally trusted news destinations. Whether it was a daily newspaper like the New York Times, a news magazine like Time or a nightly newscast such as any of the network news shows, each was a destination that offered one thing above all others – the news. And whether you agreed with them or not, each had an editorial process that governed what news was shared. We had a loyalty to our chosen news destinations that was built on trust.

Over the past two decades, this trust has broken down due to one primary factor – our continuing use of social media. And that has dramatically shifted how we get our news.

In the US, three out of every four people use online sources to get their news. One in two uses social media. Those aged 18 to 24 are more than twice as likely to rely on social media. In the UK, under-35s get more of their news from social media than any other source.

Also, influencers have become a source of news, particularly amongst young people. In the US, a quarter of those 18 to 24 used Instagram as a source of news about COVID.

This means that most times, we’re getting our news through a social media lens. Let’s set aside for a moment the filtering and information veracity problems that introduces. Let’s just talk about intent for a moment.

I have talked extensively in the past about information foraging when it comes to search. When information is "patchy" and spread diversely, the brain has to make a quick, calculated guess about which patch is most likely to contain the information it's looking for. With Information Foraging, the intent we have frames everything that comes after.
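For readers who want the mechanics, the patch model behind Information Foraging Theory (Pirolli and Card, borrowing from optimal foraging in ecology) can be sketched roughly as: choose the patch with the highest expected rate of gain, meaning the expected information value divided by the time to reach the patch plus the time spent foraging in it. Here is a toy illustration with made-up numbers; it is a sketch of the idea, not anyone's production code.

```python
# Toy sketch of the patch-selection idea in Information Foraging Theory:
# forage first in the "patch" with the best expected rate of gain, i.e.
# expected information value / (time to reach the patch + time spent in it).
# All patches and numbers below are invented for illustration.

patches = [
    {"name": "news site",      "expected_gain": 8.0, "time_to_reach": 2.0, "time_in_patch": 10.0},
    {"name": "social feed",    "expected_gain": 3.0, "time_to_reach": 0.5, "time_in_patch": 4.0},
    {"name": "search results", "expected_gain": 6.0, "time_to_reach": 1.0, "time_in_patch": 5.0},
]

def rate_of_gain(patch):
    return patch["expected_gain"] / (patch["time_to_reach"] + patch["time_in_patch"])

for p in patches:
    print(f'{p["name"]}: {rate_of_gain(p):.2f} units of information per unit time')

best = max(patches, key=rate_of_gain)
print(f'Forage first in: {best["name"]}')
```

The point is simply that intent sets the expected gain, and the brain weighs that gain against the cost of getting to and working through each patch.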

In today’s digital world, information sources have disaggregated into profoundly patchy environments. We still go to news-first destinations like CNN or Fox News but we also get much of our information about the world through our social media feeds. What was interesting about the Reuters report was that it was started before the COVID pandemic, but the second part of the study was conducted during COVID. And it highlights a fascinating truth about our relationship with the news when it comes to trust.

The study shows that the majority of us don't trust the news we get through social media but most times, we're okay with that. Less than 40% of people trust the news in general, and even when we pick a source, less than half of us trust that particular channel. Only 22% indicated they trust the news they see in social media. Yet half of us admit we use social media to get our news. The younger we are, the more reliant we are on social media for news. The fastest-growing sources for news amongst all age groups – but especially those under 30 – are Instagram, Snapchat and WhatsApp.

Here's another troubling fact that fell out of the study. Social platforms, especially Instagram and Snapchat, are dominated by influencers. That means that much of our news comes to us by way of a celebrity influencer reposting it on their feed. This is a far cry from the editorial review process that used to act as a gatekeeper on our trusted news sources.

So why do we continue to use news sources we admit we don’t trust? I suspect it may have to do with something called the Meaning Maintenance Model. Proposed in 2006 by Heine, Proulx and Vohs, the model speculates that a primary driver for us is to maintain our beliefs in how the world works. This is related to the sense making loop (Klein, Moon and Hoffman) I’ve also talked about in the past. We make sense of the world by first starting with the existing frame of what we believe to be true. If what we’re experiencing is significantly different from what we believe, we will update our frame to align with the new evidence.

What the Meaning Maintenance Model suggests is that we will go to great lengths to avoid updating our frame. It’s much easier just to find supposed evidence that supports our current beliefs. So, if our intent is to get news that supports our existing world view, social media is the perfect source. It’s algorithmically filtered to match our current frame. Even if we believe the information is suspect, it still comforts us to have our beliefs confirmed. This works well for news about politics, societal concerns and other ideologically polarized topics.

We don't like to admit this is the case. According to the Reuters study, 60% of us indicate we want news sources that are objective and not biased to any particular point of view. But this doesn't jibe with reality at all. As I wrote about in a previous column, almost all mainstream news sources in the US appear to have a significant bias to the right or left. If we're talking about news that comes through social media channels, that bias is doubled down on. In practice, we are quite happy foraging from news sources that are biased, as long as that bias matches our own.

But then something like COVID comes along. Suddenly, we all have skin in the game in a very real and immediate way. Our information foraging intent changes and our minimum threshold for the reliability of our news sources goes way up. The Reuters study found that when it comes to sourcing COVID information, the most trusted sources are official sites of health and scientific organizations. The least trusted sources are random strangers, social media and messaging apps.

It requires some reading between the lines, but the Reuters study paints a troubling picture of the state of journalism and our relationship with it. Where we get our information directly impacts what we believe. And what we believe determines what we do.

These are high stakes in an all-in game of survival.

Saying So Long to SEMPO

Yesterday afternoon, while I was in line at the grocery store, my phone pinged. I was mentioned in a Twitter post. For me, that’s becoming a pretty uncommon experience. So I checked the post.  And that’s how I found out that SEMPO is no more.

The tweet was from Dana Todd, who was responding to a Search Engine Journal article by Roger Montti about the demise of SEMPO. For those of you who don't know SEMPO: it was the Search Engine Marketing Professionals Organization.

It was a big part of my life during what seems like a lifetime ago. Todd was even more involved. Hence the tweet.

Increasingly I find my remaining half-life in digital consists of an infrequent series of “remember-when” throwbacks. This will be one of those.

Todd’s issue with the article was that much of the 17-year history of the organization was glossed over, as Montti chose to focus mainly on the controversies of the first year or two of its existence.

As Todd said, “You only dredged up the early stages of the organization, in its infancy as we struggled to gain respect and traction, and were beset by naysayers who looked for a reason we should fail. We didn’t fail.”

She then added, “There is far more to the SEMPO story, and far more notable people who put in blood sweat and tears to build not just the organization, but the entire industry.”

I was one of those people. But before that, I was also one of the early naysayers.

SEMPO started in 2003. I didn’t join until 2004. I spent at least part of that first year joining the chorus bitching about the organization. And then I realized that I could either bitch from the outside — or I could effect change from the inside.  

After joining, I quickly found myself on that same SEMPO board that I'd been complaining about. In 2005, I became co-chair of the research committee. In 2006, I became the chair of SEMPO. I served in that role for two years and eventually stepped down from SEMPO at the same time I stepped away from the search industry.

Like Todd (who was the president of SEMPO for part of the time I was the chairman), I am proud of what we did, and extraordinarily proud of the team that made it happen. Many of the people I admired most in the industry served with me on that board.

Todd will always be one of my favorite search people. But I also had the privilege of serving with Jeff Pruit, Kevin Lee, Bill Hunt, Dave Fall, Christine Churchill and the person who got the SEMPO ball rolling, along with Todd: Barbara Coll. There were many, many others.

Now, SEMPO is being absorbed by the Digital Analytics Association, which, according to its announcement,  “is committed to helping former SEMPO members become fully integrated into DAA, and will be forming a special interest group (SIG) for search analytics.”

I’ve got to admit: That hurts. Being swallowed up, becoming nothing more than a special interest group, is a rather ignoble end for the association I gave so much to.

But as anyone who has raised a child can tell you, you know you’ve been successful when they no longer need you. And that’s how I choose to interpret this event. The search industry no longer needs SEMPO, at least as a stand-alone organization.

And if that’s the case, then SEMPO knocked it out of the park. Because that sure as hell wasn’t true back in 2003.

Search in 2003 was the Wild West. According to legend, there were white-hat SEOs and black-hat SEOs.

But truth be told, most of us wore hats that were some shade of grey.

The gunslingers of natural search (or organic SEO) were slowly and very reluctantly giving up their turf to the encroaching new merchants of paid search. Google AdWords had only been around for three years, but its launch introduced a whole new dynamic to the ecosystem. Google suddenly had to start a relationship with search marketers.

Before that, Google's only attempt to reach out came via a rogue mystery poster on SEO industry forums named "googleguy" (later suspected to be search quality team lead Matt Cutts). To call search an industry would be stretching the term to its breaking point.

The introduction of paid search was creating a two-sided marketplace, and that was forcing search to become more civilized.

The process of civilization is always difficult. It requires the establishment of trust and respect, two commodities that were in desperately short supply in search circa 2003.

SEMPO was the one organization that did the most to bring civilization to the search marketplace. It gave Google a more efficient global conduit to thousands of search marketers. And it gave those search marketers a voice that Google would actually pay some attention to.

But it was more than just starting a conversation. SEMPO challenged search marketers to think beyond their own interests. The organization laid the foundation for a more sustainable and equitable search ecosystem. If SEMPO accomplished anything to be proud of, it was in preventing the Tragedy of the Commons from killing search before it had a chance to establish itself as the fastest growing advertising marketplace in history.

Dana Todd wrapped up her extended Twitter post by writing, “I can say confidently Google wouldn’t be worth $1T without us. SEMPO — you mattered.”

Dana, just like in the old SEMPO days when we double-teamed a message, you said it better than I ever could.

And Google? You’re welcome.

Just in Time for Christmas: More Search Eye-Tracking

The good folks over at the Nielsen Norman Group have released a new search eye-tracking report. The findings are quite similar to those of a study my former company — Mediative — did a number of years ago (this link goes to a write-up about the study. Unfortunately, the link to the original study is broken. *Insert head smack here).

In the Nielsen Norman study, the two authors — Kate Moran and Cami Goray — looked at how a more visually rich and complex search results page would impact user interaction with the page. The authors of the report called the sum of participant interactions a “Pinball Pattern”: “Today, we find that people’s attention is distributed on the page and that they process results more nonlinearly than before. We observed so much bouncing between various elements across the page that we can safely define a new SERP-processing gaze pattern — the pinball pattern.”

While I covered this at some length when the original Mediative report came out in 2014 (in three separate columns: 1,2 & 3), there are some themes that bear repeating. Unfortunately, I found the study’s authors missed what I think are some of the more interesting implications. 

In the days of the “10 Blue Links” search results page, we used the same scanning strategy no matter what our intent was. In an environment where the format never changes, you can afford to rely on a stable and consistent strategy. 

In our first eye tracking study, published in 2004, this consistent strategy led to something we called the Golden Triangle. But those days are over.

Today, when every search result can look a little bit different, it comes as no surprise that every search “gaze plot” (the path the eyes take through the results page) will also be different. Let’s take a closer look at the reasons for this. 

SERP Eye Candy

In the Nielsen Norman study, the authors felt “visual weighting” was the main factor in creating the “Pinball Pattern”: “The visual weight of elements on the page drives people’s scanning patterns. Because these elements are distributed all over the page and because some SERPs have more such elements than others, people’s gaze patterns are not linear. The presence and position of visually compelling elements often affect the visibility of the organic results near them.”

While the visual impact of the page elements is certainly a factor, I think it’s only part of the answer. I believe a bigger, and more interesting, factor is how the searcher’s brain and its searching strategies have evolved in lockstep with a more visually complex results page. 

The Importance of Understanding Intent

The reason why we see so much variation in scan patterns is that there is also extensive variation in searchers’ intent. The exact same search query could be used by someone intent on finding an online or physical place to purchase a product, comparing prices on that product, looking to learn more about the technical specs of that product, looking for how-to videos on the use of the product, or looking for consumer reviews on that product.

It’s the same search, but with many different intents. And each of those intents will result in a different scanning pattern. 

Predetermined Page Visualizations

I really don’t believe we start each search page interaction with a blank slate, passively letting our eyes be dragged to the brightest, shiniest object on the page. I think that when we launch the search, our intent has already created an imagined template for the page we expect to see. 

We have all used search enough to be fairly accurate at predicting what the page elements might be: thumbnails of videos or images, a map showing relevant local results, perhaps a Knowledge Graph result in the right-hand column.

Yes, the visual weighting of elements acts as an anchor to draw the eye, but I believe the eye is using this anticipated template to efficiently parse the results page.

I have previously referred to this behavior as a “chunking” of the results page. And we already have an idea of what the most promising chunks will be when we launch the search. 

It’s this chunking strategy that’s driving the “pinball” behavior in the Nielsen Norman study.  In the Mediative study, it was somewhat surprising to see that users were clicking on a result in about half the time it took in our original 2005 study. We cover more search territory, but thanks to chunking, we do it much more efficiently.

One Last Time: Learn Information Scent

Finally, let me drag out a soapbox I haven’t used for a while. If you really want to understand search interactions, take the time to learn about Information Scent and how our brains follow it (Information Foraging Theory — Pirolli and Card, 1999 — the link to the original study is also broken. *Insert second head smack, this one harder.). 

This is one area where the Nielsen Norman Group and I are totally aligned. In 2003, Jakob Nielsen — the first N in NNG — called the theory “the most important concept to emerge from human-computer interaction research since 1993.”

On that we can agree.

Why Elizabeth Warren Wants to Break Up Big Tech

Earlier this year, Democratic Presidential Candidate Elizabeth Warren posted an online missive in which she laid out her plans to break up big tech (notably Amazon, Google and Facebook). In it, she noted:

“Today’s big tech companies have too much power — too much power over our economy, our society, and our democracy. They’ve bulldozed competition, used our private information for profit, and tilted the playing field against everyone else. And in the process, they have hurt small businesses and stifled innovation.”

We, here in the west, are big believers in Adam Smith’s Invisible Hand. We inherently believe that markets will self-regulate and eventually balance themselves. We are loath to involve government in the running of a free market.

In introducing the concept of the Invisible Hand, Smith speculated that,  

“[The rich] consume little more than the poor, and in spite of their natural selfishness and rapacity…they divide with the poor the produce of all their improvements. They are led by an invisible hand to make nearly the same distribution of the necessaries of life, which would have been made, had the earth been divided into equal portions among all its inhabitants, and thus without intending it, without knowing it, advance the interest of the society, and afford means to the multiplication of the species.”

In short, a rising tide raises all boats. But there is a dicey little dilemma buried in the midst of the Invisible Hand Premise – summed up most succinctly by the fictitious Gordon Gekko in the 1987 movie Wall Street: “Greed is Good.”

More eloquently, economist and Nobel laureate Milton Friedman explained it like this:

“The great virtue of a free market system is that it does not care what color people are; it does not care what their religion is; it only cares whether they can produce something you want to buy. It is the most effective system we have discovered to enable people who hate one another to deal with one another and help one another.” 

But here’s the thing. Up until very recently, the concept of the Invisible Hand dealt only with physical goods. It was all about maximizing tangible resources and distributing them to the greatest number of people in the most efficient way possible.

The difference now is that we're not just talking about toasters or running shoes. Physical things are not the stock in trade of Facebook or Google. They deal in information, feelings, emotions, beliefs and desires. We are not talking about hardware any longer; we are talking about the very operating system of our society. The thing that guides the Invisible Hand is no longer consumption, it's influence. And, in that case, we have to wonder whether we're willing to trust our future to the conscience of a corporation.

For this reason, I suspect Warren might be right. All the past arguments for keeping government out of business were based on a physical market. When we shift that to a market that peddles influence, those arguments are flipped on their head. Milton Friedman himself said, "It (the corporation) only cares whether they can produce something you want to buy." Let's shift that to today's world and apply it to a corporation like Facebook – "It only cares whether they can produce something that captures your attention." To expect anything else from a corporation that peddles persuasion is to expect too much.

The problem with Warren’s argument is that she is still using the language of a market that dealt with consumable products. She wants to break up a monopoly that is limiting competition. And she is targeting that message to an audience that generally believes that big government and free markets don’t mix.

The much, much bigger issue here is that even if you believe in the efficacy of the Invisible Hand, as described by all believers from Smith to Friedman, you also have to believe that the single purpose of a corporation that relies on selling persuasion will be to influence even more people more effectively. None of the most fervent evangelists of the Invisible Hand ever argued that corporations have a conscience. They simply stated that the interests of a profit-driven company and an audience intent on consumption were typically aligned.

We’re now playing a different game with significantly different rules.

Search and The Path to Purchase

Just how short do we want the path to purchase to be anyway?

A few weeks back, MediaPost reporter Laurie Sullivan brought this question up in her article detailing how Instagram is building e-commerce into its app. While Instagram is not usually considered a search platform, Sullivan muses on the connecting of two dots that seem destined to be joined: search and purchase. But is that a destiny that users can "buy into"?

Again, this is one of those questions where the answer is always, “It depends.”  And there are at least a few dependencies in this case.

The first is whether our perspective is as a marketer or a consumer. Marketers always want the path to purchase to be as short as possible. When we have that hat on, we won't be fully satisfied until the package hits our front step at about the same time we get the first mental inkling to buy.

Amazon has done the most to truncate the path to purchase. Marketers look longingly at its one-click ordering path – requiring mere seconds and a single click to go from search to successful fulfillment. If only all purchases were this streamlined, the marketer in us muses.

But if we're leading our double life as a consumer, there is a second "It depends…" And that one depends on what our shopping intentions are. There are times when we – as consumers – also want the fastest possible path to purchase. But that's not true all the time.

Back when I was looking at purchase behaviors in the B2B world, I found that there are variables that lead to different intentions on the part of the buyer. Essentially, it boils down to the degree of risk and reward in the purchase itself. I first wrote about this almost a decade ago now.

If there’s a fairly high degree of risk inherent in the purchase itself, the last thing we want is a frictionless path to purchase. These are what we call high consideration purchases.

We want to take our time, feeling that we’ve considered all the options. One click ordering scares the bejeezus out of us.

Let’s go back to the Amazon example. Today, Amazon is the default search engine of choice for product searches, outpacing Google by a margin rapidly approaching double digits. But this is not really an apples to apples comparison. We have to factor in the deliberate intention of the user. We go to Amazon to buy, so a faster path to purchase is appropriate. We go to Google to consider. And for reasons I’ll get into soon, we would be less accepting of a “buy” button there.

The buying paths we would typically take in a social platform like Instagram are probably not that high risk, so a fast path to purchase might be fine. But there's another factor that we need to consider when shortening the path to purchase – or building a path in the first place – in what has traditionally been considered a discovery platform. Let's call it a mixing of motives.

Google has been dancing around a shorter path to purchase for years now. As Sullivan said in her article, “Search engines have strength in what’s known as discovery shopping, but completing the transaction has never been a strong point — mainly because brands decline to give up the ownership of the data.”

Data ownership is one thing, but even if the data were available, including a “buy now” button in search results can also lead to user trust issues. For many purchases, we need to feel that our discovery engine has no financial motive in the ordering of their search results. This – of course – is a fallacy we build in our own minds. There is always a financial motive in the ordering of search results. But as long as it’s not overt, we can trick ourselves into living with it. A “buy now” button makes it overt.

This problem of mixed motives is not just a problem of user perception. It also can lead publishers down a path that leaves objectivity behind and pursues higher profits ahead. One example is TripAdvisor. Some years ago, they made the corporate decision to parlay their strong position as a travel experience discovery platform into an instant booking platform. In the beginning, they separated this booking experience onto its own platform under the brand Viator. Today, the booking experience has been folded into the main TripAdvisor results and – more disturbingly – is now the default search order. Every result at the top of the page has a “Book Now” button.

Speaking as a sample of one, I trust TripAdvisor a lot less than I used to.

 

Is Google Politically Biased?

As a company, the answer is almost assuredly yes.

But are the search results biased? That’s a much more nuanced question.

Sundar Pichai testifying before Congress

In trying to answer that question last week, Google CEO Sundar Pichai tried to explain how Google's algorithm works to Congress's House Judiciary Committee (which is kind of like God explaining how the universe works to my sock, but I digress). One of the catalysts for this latest appearance of a tech CEO was another one of President Trump's ranting tweets that intimated something was rotten in the Valley of the Silicon:

“Google search results for ‘Trump News’ shows only the viewing/reporting of Fake New Media. In other words, they have it RIGGED, for me & others, so that almost all stories & news is BAD. Fake CNN is prominent. Republican/Conservative & Fair Media is shut out. Illegal? 96% of … results on ‘Trump News’ are from National Left-Wing Media, very dangerous. Google & others are suppressing voices of Conservatives and hiding information and news that is good. They are controlling what we can & cannot see. This is a very serious situation-will be addressed!”

Granted, this tweet is non-factual, devoid of any type of evidence and verging on frothing at the mouth. As just one example, let's take the 96% number that Trump quotes in the above tweet. That came from a very unscientific straw poll that was done by one reporter on a far right-leaning site called PJ Media. In effect, Trump did exactly what he accuses Google of doing – he cherry-picked his source and called it a fact.

But what Trump has inadvertently put his finger on is the uneasy balance that Google tries to maintain as both a search engine and a publisher. And that’s where the question becomes cloudy. It’s a moral precipice that may be clear in the minds of Google engineers and executives, but it’s far from that in ours.

Google has gone on the record as ensuring their algorithm is apolitical. But based on a recent interview with Google News head Richard Gingras, there is some wiggle room in that assertion. Gingras stated,

“With Google Search, Google News, our platform is the open web itself. We’re not arbiters of truth. We’re not trying to determine what’s good information and what’s not. When I look at Google Search, for instance, our objective – people come to us for answers, and we’re very good at giving them answers. But with many questions, particularly in the area of news and public policy, there is not one single answer. So we see our role as [to] give our users, citizens, the tools and information they need – in an assiduously apolitical fashion – to develop their own critical thinking and hopefully form a more informed opinion.”

But –  in the same interview – he says,

“What we will always do is bias the efforts as best we can toward authoritative content – particularly in the context of breaking news events, because major crises do tend to attract the bad actors.”

So Google does boost news sites that it feels are reputable, and it's these sites – like CNN – that typically dominate the results. Do reputable news sources tend to lean left? Probably. But that isn't Google's fault. That's the nature of the open web. If you use that as your platform, you build in any inherent biases. And the minute you further filter on top of that platform, you leave yourself open to accusations of editorializing.

There is another piece to this puzzle. The fact is that searches on Google are biased, but that bias is entirely intentional. The bias in this case is yours. Search results have been personalized so that they're more relevant to you. Things like your location, your past search history, the way you structure your query and a number of other signals will be used by Google to filter the results you're shown. There is no liberal conspiracy. It's just the way that the search algorithm works. In this way, Google is prone to the same type of filter-bubble problem that Facebook has. In another interview, Tim Hwang, director of the Harvard-MIT Ethics and Governance of AI Initiative, touches on this:

“I was struck by the idea that whereas those arguments seem to work as late as only just a few years ago, they’re increasingly ringing hollow, not just on the side of the conservatives, but also on the liberal side of things as well. And so what I think we’re seeing here is really this view becoming mainstream that these platforms are in fact not neutral, and that they are not providing some objective truth.”

The biggest challenge here lies not in the reality of what Google is or how it works, but in what our perception of Google is. We will never know the inner workings of the Google algorithm, but we do trust in what Google shows us. A lot. In our own research some years ago, we saw a significant lift in consumer trust when brands showed up on top of search results. And this effect was replicated in a recent study that looked at Google’s impact on political beliefs. This study found that voter preferences can shift by as much as 20% due to biased search rankings – and that effect can be even higher in some demographic groups.

If you are the number one channel for information, if you manipulate the ranking of the information in any way and if you wield the power to change a significant percentage of minds based on that ranking – guess what? You are the arbiter of truth. Like it or not.

Deconstructing the Google/Facebook Duopoly

We’ve all heard about it. The Google/Facebook Duopoly. This was what I was going to write about last week before I got sidetracked. I’m back on track now (or, at least, somewhat back on track). So let’s start by understanding what a duopoly is…

…a situation in which two suppliers dominate the market for a commodity or service.

And this, from Wikipedia…

… In practice, the term is also used where two firms have dominant control over a market.

So, to have a duopoly, you need two things: domination and control. First, let's deal with the domination question. In 2017, Google and Facebook together took a very healthy 59% slice of all digital ad revenues in the US. Google captured 38.6% and Facebook 20%. That certainly seems dominant. But if online marketing is the market, that is a very large basket with a lot of different items thrown in. So, let's do a broad categorization to help deconstruct this a bit.

Typically, when I try to understand marketing, I like to start with humans – or more specifically – with what that lump of grey matter we call a brain is doing. And if we're talking about marketing, we're talking about attention – how our brains are engaging with our environment. That is an interesting way to divide up the market we're talking about, because it neatly bisects the attentional market, with Google on one side and Facebook on the other.

Google dominates the top-down, intent-driven, attentionally focused market. If you're part of this market, you have something in mind and you're trying to find it. If we use search as a proxy for this attentional state (which is the best proxy I can think of), we see just how dominant Google is. It owns this market to a huge degree. According to Statista, Google had about 87% of the total worldwide search market as of April 2018. The key metric here is success. Google needs to be the best way to fulfill those searches. And if market share is any indication, it is.

Facebook apparently dominates the bottom-up awareness market. These are the people killing time online; they are not actively looking with commercial intent. This is more of an awareness play, where attention has to be diverted to an advertising message. Therefore, time spent becomes the key factor. You need to be in front of the right eyeballs, so you need a lot of eyeballs and a way to target the right ones.

Here is where things get interesting. If we look at share of consumer time, Google dominates here. But there is a huge caveat, which I’ll get to in a second. According to a report this spring by Pivotal Research, Google owns just under 28% of all the time we spend consuming digital content. Facebook has just over a 16% share of this market. So why do we have a duopoly and not a monopoly? It’s because of that caveat – a whopping slice of Google’s “time spent” dominance comes from YouTube. And YouTube has an entirely different attentional profile – one that’s much harder to present advertising against. When you’re watching a video on YouTube, your attention is “locked” on the video. Disrupting that attention erodes the user experience. So Google has had a tough time monetizing YouTube.

According to Seeking Alpha, Google's search ad business will account for 68% of its total revenue of $77 billion this year. That's over $52 billion sitting in that "top-down," attentionally focused bucket. YouTube, which is very much in the "bottom-up" disruptive bucket, accounts for $12 billion in advertising revenues. Certainly nothing to sneeze at, but not on the same scale as Google's search business. Facebook's revenue, at about $36 billion, is also generated by this same "bottom-up" market, but with a different attentional profile. The Facebook user is not as "locked in" as they are on YouTube. With the right targeting tools, something that Facebook has excelled at, you have a decent chance of gaining their attention long enough to notice your ad.

Control

If we look at the second part of the definition of a duopoly – that of control – we see some potential chinks in the armor of both Google and Facebook. Typically, market control was in the form of physical constraints against the competition. But in this new type of market, the control can only be in the minds of the users. The barriers to competitive entry are all defined in mental terms.

In  Google’s case, they have a single line of defense: they have to be an unbreakable habit. Habits are mental scripts that depend on two things – obvious environmental cues that trigger habitual behavior and acceptable outcomes once the script completes. So, to maintain their habit, Google has to ensure that whatever environment you might be in when searching online for something, Google is just a click or two away. Additionally, they have to meet a certain threshold of success. Habits are tough to break, but there are two areas of vulnerability in Google’s dominance.

Facebook is a little different. They need to be addictive. This is a habit taken to the extreme. Addictions depend on pushing certain reward buttons in the brain that lead to an unhealthy behavioral script which becomes obsessive. The more addicted you are to Facebook and its properties, the more successful they will be in their dominance of the market. You can see the inherent contradiction here. Despite Facebook's protests to the contrary, with their current revenue model they can only succeed at the expense of our mental health.

I find these things troubling. When you have two for-profit organizations fighting to dominate a market that is defined in our own minds, you have the potential for a lot of unhealthy corporate decisions.