Why Elizabeth Warren Wants to Break Up Big Tech

Earlier this year, Democratic Presidential Candidate Elizabeth Warren posted an online missive in which she laid out her plans to break up big tech (notably Amazon, Google and Facebook). In it, she noted:

“Today’s big tech companies have too much power — too much power over our economy, our society, and our democracy. They’ve bulldozed competition, used our private information for profit, and tilted the playing field against everyone else. And in the process, they have hurt small businesses and stifled innovation.”

We, here in the west, are big believers in Adam Smith’s Invisible Hand. We inherently believe that markets will self-regulate and eventually balance themselves. We are loath to involve government in the running of a free market.

In introducing the concept of the Invisible Hand, Smith speculated:

“[The rich] consume little more than the poor, and in spite of their natural selfishness and rapacity…they divide with the poor the produce of all their improvements. They are led by an invisible hand to make nearly the same distribution of the necessaries of life, which would have been made, had the earth been divided into equal portions among all its inhabitants, and thus without intending it, without knowing it, advance the interest of the society, and afford means to the multiplication of the species.”

In short, a rising tide raises all boats. But there is a dicey little dilemma buried in the midst of the Invisible Hand Premise – summed up most succinctly by the fictitious Gordon Gekko in the 1987 movie Wall Street: “Greed is Good.”

More eloquently, economist and Nobel laureate Milton Friedman explained it like this:

“The great virtue of a free market system is that it does not care what color people are; it does not care what their religion is; it only cares whether they can produce something you want to buy. It is the most effective system we have discovered to enable people who hate one another to deal with one another and help one another.” 

But here’s the thing. Up until very recently, the concept of the Invisible Hand dealt only with physical goods. It was all about maximizing tangible resources and distributing them to the greatest number of people in the most efficient way possible.

The difference now is that we’re not just talking about toasters or running shoes. Physical things are not the stock in trade of Facebook or Google. They deal in information, feelings, emotions, beliefs and desires. We are not talking about hardware any longer; we are talking about the very operating system of our society. The thing that guides the Invisible Hand is no longer consumption, it’s influence. And, in that case, we have to wonder whether we’re willing to trust our future to the conscience of a corporation.

For this reason, I suspect Warren might be right. All the past arguments for keeping government out of business were based on a physical market. When we shift that to a market that peddles influence, those arguments are flipped on their head. Milton Friedman himself said, “It (the corporation) only cares whether they can produce something you want to buy.” Let’s shift that to today’s world and apply it to a corporation like Facebook – “It only cares whether they can produce something that captures your attention.” To expect anything else from a corporation that peddles persuasion is to expect too much.

The problem with Warren’s argument is that she is still using the language of a market that dealt with consumable products. She wants to break up a monopoly that is limiting competition. And she is targeting that message to an audience that generally believes that big government and free markets don’t mix.

The much, much bigger issue here is that even if you believe in the efficacy of the Invisible Hand, as described by all believers from Smith to Friedman, you also have to believe that the single purpose of a corporation that relies on selling persuasion will be to influence even more people more effectively. None of the most fervent evangelists of the Invisible Hand ever argued that corporations have a conscience. They simply stated that the interests of a profit-driven company and an audience intent on consumption were typically aligned.

We’re now playing a different game with significantly different rules.

Search and The Path to Purchase

Just how short do we want the path to purchase to be anyway?

A few weeks back, MediaPost reporter Laurie Sullivan brought this question up in her article detailing how Instagram is building ecommerce into its app. While Instagram is not usually considered a search platform, Sullivan muses on the connecting of two dots that seem destined to be joined: search and purchase. But is that a destiny that users can “buy into?”

Again, this is one of those questions where the answer is always, “It depends.”  And there are at least a few dependencies in this case.

The first is whether our perspective is as a marketer or a consumer. Marketers always want the path to purchase to be as short as possible. When we have that hat on, we won’t be fully satisfied until the package hits our front step at about the same time we get the first mental inkling to buy.

Amazon has done the most to truncate the path to purchase. Marketers look longingly at its one-click ordering path – requiring mere seconds and a single click to go from search to successful fulfillment. If only all purchases were this streamlined, the marketer in us muses.

But if we’re leading our double life as a consumer, there is a second “It depends…” And that depends on what our shopping intentions are. There are times when we – as consumers – also want the fastest possible path to purchase. But that’s not true all the time.

Back when I was looking at purchase behaviors in the B2B world, I found that there are variables that lead to different intentions on the part of the buyer. Essentially, it boils down to the degree of risk and reward in the purchase itself. I first wrote about this almost a decade ago now.

If there’s a fairly high degree of risk inherent in the purchase itself, the last thing we want is a frictionless path to purchase. These are what we call high consideration purchases.

We want to take our time, feeling that we’ve considered all the options. One-click ordering scares the bejeezus out of us.

Let’s go back to the Amazon example. Today, Amazon is the default search engine of choice for product searches, outpacing Google by a margin rapidly approaching double digits. But this is not really an apples to apples comparison. We have to factor in the deliberate intention of the user. We go to Amazon to buy, so a faster path to purchase is appropriate. We go to Google to consider. And for reasons I’ll get into soon, we would be less accepting of a “buy” button there.

The buying paths we would typically take in a social platform like Instagram are probably not that high risk, so a fast path to purchase might be fine. But there’s another factor that we need to consider when shortening the path to purchase – or building a path in the first place – in what has traditionally been considered a discovery platform. Let’s call it a mixing of motives.

Google has been dancing around a shorter path to purchase for years now. As Sullivan said in her article, “Search engines have strength in what’s known as discovery shopping, but completing the transaction has never been a strong point — mainly because brands decline to give up the ownership of the data.”

Data ownership is one thing, but even if the data were available, including a “buy now” button in search results can also lead to user trust issues. For many purchases, we need to feel that our discovery engine has no financial motive in the ordering of their search results. This – of course – is a fallacy we build in our own minds. There is always a financial motive in the ordering of search results. But as long as it’s not overt, we can trick ourselves into living with it. A “buy now” button makes it overt.

This problem of mixed motives is not just a problem of user perception. It also can lead publishers down a path that leaves objectivity behind and pursues higher profits ahead. One example is TripAdvisor. Some years ago, they made the corporate decision to parlay their strong position as a travel experience discovery platform into an instant booking platform. In the beginning, they separated this booking experience onto its own platform under the brand Viator. Today, the booking experience has been folded into the main TripAdvisor results and – more disturbingly – is now the default search order. Every result at the top of the page has a “Book Now” button.

Speaking as a sample of one, I trust TripAdvisor a lot less than I used to.

 

Is Google Politically Biased?

As a company, the answer is almost assuredly yes.

But are the search results biased? That’s a much more nuanced question.

Sundar Pichai testifying before Congress

In trying to answer that question last week, Google CEO Sundar Pichai explained how Google’s algorithm works to the House Judiciary Committee (which is kind of like God explaining how the universe works to my sock, but I digress). One of the catalysts for this latest appearance of a tech executive was another of President Trump’s ranting tweets intimating that something was rotten in the Valley of the Silicon:

“Google search results for ‘Trump News’ shows only the viewing/reporting of Fake News Media. In other words, they have it RIGGED, for me & others, so that almost all stories & news is BAD. Fake CNN is prominent. Republican/Conservative & Fair Media is shut out. Illegal? 96% of … results on ‘Trump News’ are from National Left-Wing Media, very dangerous. Google & others are suppressing voices of Conservatives and hiding information and news that is good. They are controlling what we can & cannot see. This is a very serious situation-will be addressed!”

Granted, this tweet is non-factual, devoid of any type of evidence and verging on frothing at the mouth. As just one example, let’s take the 96% number that Trump quotes in the above tweet. That came from a very unscientific straw poll done by one reporter on a far right-leaning site called PJ Media. In effect, Trump did exactly what he accuses Google of doing – he cherry-picked his source and called it a fact.

But what Trump has inadvertently put his finger on is the uneasy balance that Google tries to maintain as both a search engine and a publisher. And that’s where the question becomes cloudy. It’s a moral precipice that may be clear in the minds of Google engineers and executives, but it’s far from that in ours.

Google has gone on record insisting that its algorithm is apolitical. But based on a recent interview with Google News head Richard Gingras, there is some wiggle room in that assertion. Gingras stated:

“With Google Search, Google News, our platform is the open web itself. We’re not arbiters of truth. We’re not trying to determine what’s good information and what’s not. When I look at Google Search, for instance, our objective – people come to us for answers, and we’re very good at giving them answers. But with many questions, particularly in the area of news and public policy, there is not one single answer. So we see our role as [to] give our users, citizens, the tools and information they need – in an assiduously apolitical fashion – to develop their own critical thinking and hopefully form a more informed opinion.”

But – in the same interview – he said:

“What we will always do is bias the efforts as best we can toward authoritative content – particularly in the context of breaking news events, because major crises do tend to attract the bad actors.”

So Google does boost news sites that it feels are reputable, and it’s these sites – like CNN – that typically dominate the results. Do reputable news sources tend to lean left? Probably. But that isn’t Google’s fault. That’s the nature of the open web. If you use that as your platform, you build in any inherent biases. And the minute you further filter on top of that platform, you leave yourself open to accusations of editorializing.

There is another piece to this puzzle. The fact is that searches on Google are biased, but that bias is entirely intentional. The bias in this case is yours. Search results have been personalized so that they’re more relevant to you. Things like your location, your past search history, the way you structure your query and a number of other signals are used by Google to filter the results you’re shown. There is no liberal conspiracy. It’s just the way the search algorithm works. In this way, Google is prone to the same type of filter-bubble problem that Facebook has. Tim Hwang, director of the Harvard-MIT Ethics and Governance of AI Initiative, touched on this in another interview:

“I was struck by the idea that whereas those arguments seem to work as late as only just a few years ago, they’re increasingly ringing hollow, not just on the side of the conservatives, but also on the liberal side of things as well. And so what I think we’re seeing here is really this view becoming mainstream that these platforms are in fact not neutral, and that they are not providing some objective truth.”
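To make that personalization mechanism concrete, here is a toy sketch of how signals like location and search history might re-order results. To be clear, this is purely illustrative – Google’s actual ranking system is proprietary and vastly more complex, and every name and weight below is invented.

```python
# Purely illustrative: a hypothetical personalization re-ranker.
# Google's real system is proprietary; these signals and weights are invented.

def personalize(results, user):
    """Re-order baseline results using simple, user-specific signals."""
    def score(result):
        s = result["base_relevance"]              # query/content match, same for everyone
        if result["region"] == user["location"]:
            s += 0.2                              # nudge local content upward
        if result["topic"] in user["recent_topics"]:
            s += 0.3                              # nudge topics the user has searched before
        return s
    return sorted(results, key=score, reverse=True)

results = [
    {"title": "National coverage", "base_relevance": 0.8, "region": "US", "topic": "politics"},
    {"title": "Local angle",       "base_relevance": 0.7, "region": "CA", "topic": "politics"},
]
user = {"location": "CA", "recent_topics": {"politics"}}

print([r["title"] for r in personalize(results, user)])
# ['Local angle', 'National coverage'] -- two users issuing the identical query
# can see different orderings. No conspiracy required; that's personalization.
```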

The biggest challenge here lies not in the reality of what Google is or how it works, but in what our perception of Google is. We will never know the inner workings of the Google algorithm, but we do trust in what Google shows us. A lot. In our own research some years ago, we saw a significant lift in consumer trust when brands showed up on top of search results. And this effect was replicated in a recent study that looked at Google’s impact on political beliefs. This study found that voter preferences can shift by as much as 20% due to biased search rankings – and that effect can be even higher in some demographic groups.

If you are the number one channel for information, if you manipulate the ranking of the information in any way and if you wield the power to change a significant percentage of minds based on that ranking – guess what? You are the arbiter of truth. Like it or not.

Deconstructing the Google/Facebook Duopoly

We’ve all heard about it. The Google/Facebook Duopoly. This was what I was going to write about last week before I got sidetracked. I’m back on track now (or, at least, somewhat back on track). So let’s start by understanding what a duopoly is…

…a situation in which two suppliers dominate the market for a commodity or service.

And this, from Wikipedia…

… In practice, the term is also used where two firms have dominant control over a market.

So, to have a duopoly, you need two things: domination and control. First, let’s deal with the domination question. In 2017, Google and Facebook together took a very healthy 59% slice of all digital ad revenues in the US. Google captured 38.6% of the total, with Facebook taking 20%. That certainly seems dominant. But if online marketing is the market, that is a very large basket with a lot of different items thrown in. So, let’s do a broad categorization to help deconstruct this a bit. Typically, when I try to understand marketing, I like to start with humans – or, more specifically, with what that lump of grey matter we call a brain is doing. And if we’re talking about marketing, we’re talking about attention – how our brains are engaging with our environment. That is an interesting way to divide up the market we’re talking about, because it neatly bisects the attentional market, with Google on one side and Facebook on the other.

Google dominates the top-down, intent-driven, attentionally focused market. If you’re part of this market, you have something in mind and you’re trying to find it. If we use search as a proxy for this attentional state (which is the best proxy I can think of), we see just how dominant Google is. It owns this market to a huge degree. According to Statista, Google held about 87% of the total worldwide search market as of April 2018. The key metric here is success. Google needs to be the best way to fulfill those searches. And if market share is any indication, it is.

Facebook apparently dominates the bottom-up awareness market. These are the people killing time online; they are not actively looking with commercial intent. This is more of an awareness play where attention has to be diverted to an advertising message. Therefore, time spent becomes the key factor. You need to be in front of the right eyeballs, so you need a lot of eyeballs and a way to target the right ones.

Here is where things get interesting. If we look at share of consumer time, Google dominates here. But there is a huge caveat, which I’ll get to in a second. According to a report this spring by Pivotal Research, Google owns just under 28% of all the time we spend consuming digital content. Facebook has just over a 16% share of this market. So why do we have a duopoly and not a monopoly? It’s because of that caveat – a whopping slice of Google’s “time spent” dominance comes from YouTube. And YouTube has an entirely different attentional profile – one that’s much harder to present advertising against. When you’re watching a video on YouTube, your attention is “locked” on the video. Disrupting that attention erodes the user experience. So Google has had a tough time monetizing YouTube.

According to Seeking Alpha, Google’s search ad business will account for 68% of its total revenue of $77 billion this year. That’s over $52 billion sitting in that “top-down,” attentionally focused bucket. YouTube, which is very much in the “bottom-up,” disruptive bucket, accounts for $12 billion in advertising revenues. Certainly nothing to sneeze at, but not on the same scale as Google’s search business. Facebook’s revenue, at about $36 billion, is also generated by this same “bottom-up” market, but with a different attentional profile. The Facebook user is not as “locked in” as they are on YouTube. With the right targeting tools, something Facebook has excelled at, you have a decent chance of gaining their attention long enough for them to notice your ad.
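A quick back-of-envelope tally, using only the approximate figures cited above (the article’s rough numbers, not audited financials), shows how the revenue splits across the two attentional buckets:

```python
# Back-of-envelope tally using the approximate figures cited above (in $B).
google_total_revenue = 77.0    # projected annual Google revenue
google_search_share = 0.68     # search ads as a share of that revenue

top_down = google_total_revenue * google_search_share   # intent-driven search ads
bottom_up = 12.0 + 36.0                                  # YouTube ads + Facebook ads

print(f"Top-down (intent-driven): ${top_down:.1f}B")     # ~$52.4B
print(f"Bottom-up (interruptive): ${bottom_up:.1f}B")    # ~$48.0B
```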

Control

If we look at the second part of the definition of a duopoly – that of control – we see some potential chinks in the armor of both Google and Facebook. Typically, market control has taken the form of physical constraints on the competition. But in this new type of market, the control can only be in the minds of the users. The barriers to competitive entry are all defined in mental terms.

In Google’s case, they have a single line of defense: they have to be an unbreakable habit. Habits are mental scripts that depend on two things – obvious environmental cues that trigger the habitual behavior and acceptable outcomes once the script completes. So, to maintain the habit, Google has to ensure that whatever environment you might be in when searching online for something, Google is just a click or two away. Additionally, they have to meet a certain threshold of success. Habits are tough to break, but those two dependencies – the cues and the outcomes – are also the two areas of vulnerability in Google’s dominance.

Facebook is a little different. They need to be addictive. This is a habit taken to the extreme. Addictions depend on pushing certain reward buttons in the brain, leading to an unhealthy behavioral script that becomes obsessive. The more addicted you are to Facebook and its properties, the more successful they will be in their dominance of the market. You can see the inherent contradiction here. Despite Facebook’s protests to the contrary, with their current revenue model they can only succeed at the expense of our mental health.

I find these things troubling. When you have two for-profit organizations fighting to dominate a market that is defined in our own minds, you have the potential for a lot of unhealthy corporate decisions.

 

What a Shock: Marketers Don’t Like SEO!

So, apparently marketers don’t like SEO because they don’t understand SEO. That’s the upshot of a new report just out where SEO ranked at the tail end of digital initiatives.

I call bullshit on that. It’s not that marketers don’t understand SEO. It’s that they don’t like it.

I did my first SEO work in 1996. That’s two years before there was a Google. And marketers didn’t understand SEO then. Or so they said. They’ve kept the message consistent for the last 22 years. “We don’t get SEO.”

Look, SEO is not rocket science. It’s where searcher intent intersects with content. Know what people are looking for and give it to them. It’s that simple. This is not about SEO being hard to understand. It’s about SEO being hard to do. The last time I climbed on this particular soapbox was four years ago and nothing has changed. SEO is still hard – maybe harder than it has ever been. That’s what marketers don’t like. Well, that and many other things. SEO is hard to control. It’s hard to predict. It’s hard to measure. And that makes it almost impossible to rely on. All of those things are anathema to a marketer.

But here’s the biggest thing going against SEO’s popularity with marketers: it’s not very exciting. It’s arduous. It’s about as sexy as weeding the garden. That’s probably why social media tops the list.

So why even bother with search? For two reasons. There is no better crystallization of prospect intent – short of converting on your own website – than an online search. The planets are aligned, the heavens have opened with a hallelujah chorus, the Holy Grail has fallen into your lap. I spent the better part of two decades researching search user behaviors. Trust me when I say this is as good as it gets. That’s reason one. Reason two is that somewhere between 75% and 85% of those prospects will click on an organic listing. When we’re talking about capturing a motivated prospect, this is no-brainer stuff. Yet marketers are saying no thanks, we’ll take a pass on that – thank you very much.

If online is important to your marketing, chances are extremely good that SEO is also important. I don’t care whether you like it or not. You have to do it. If you don’t want to, find someone who does.

That brings up another reason marketers hate SEO. It doesn’t really live in their domain. SEO, by its very nature, stretches across multiple domains. It has to be systemic across the entire organization. So, it’s not entirely the fault of the marketer that SEO is neglected. It tends to fall into a no-man’s land between departments. Marketers don’t push it because there are many other things they can do that they have complete control over. And if the marketers don’t push it, there is no one else that’s going to step forward. Executives, who may legitimately not understand SEO, think of it solely as a marketing exercise. Tech support hates SEO even more than marketers. And corporate compliance? Don’t get me started on corporate compliance! There is a reason why SEO has always been known as a red-headed stepchild.

As a past SEO-er, I wasn’t really surprised to see that SEO still gets no love from marketers. I’ve forced myself to eat broccoli my entire life. And it’s not because I don’t understand broccoli. It’s because I don’t like it. Some things remain constant. But you know what else? I still choke it down. Because my mom was right – it’s good for you.

 

Why The Paradox of Choice Doesn’t Apply to Netflix

A recent article in MediaPost reported that Millennials – and Baby Boomers, for that matter – prefer broad-choice platforms like Netflix and YouTube to channels specifically targeted to their demo. A recent survey found that almost 40% of respondents aged 18–24 used Netflix most often to view video content.

Author Wayne Friedman mused on the possibility that targeted channels might be a thing of the past: “This isn’t the mid-1990s. Perhaps audience segmentation into different networks — or separately branded, over-the-top digital video platforms  — is an old thing.”

It is. It’s aimed at an old world that existed before search filters. It was a world where Barry Schwartz’s Paradox of Choice was a thing. That’s not true in a world where we can quickly filter our choices.

Humans in almost every circumstance prefer the promise of abundance to scarcity. It’s how we’re hardwired. The variable here is our level of confidence in our ability to sort through the options available to us. If we feel confident that we can heuristically limit our choices to the most relevant ones, we will always forage in a richer environment.

In his book, Schwartz used the famous jam experiment of Sheena Iyengar to show how choice can paralyze us. Iyengar’s research team set up a booth with samples of jam in a gourmet food market. They alternated between a display of 6 jams and one of 24 options. They found that in terms of actually selling jams, the smaller display outperformed the larger one by a factor of 10 to 1. The study “raised the hypothesis that the presence of choice might be appealing as a theory,” Dr. Iyengar later said, “but in reality, people might find more and more choice to actually be debilitating.”

Yes, and no. What isn’t commonly cited is that, in the study, 60% of shoppers were drawn to the larger display, while only 40% were hooked by the smaller one. Yes, fewer bought, but that probably came down to a question of being able to filter, not the attraction of the display itself. Also, other researchers (Scheibehenne, Greifeneder and Todd, 2010) have run into problems trying to verify the findings of the original study. They found that “on the basis of the data, no sufficient conditions could be identified that would lead to a reliable occurrence of choice overload.”
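For what it’s worth, the arithmetic bears this out if we read the 10-to-1 gap as a per-visitor conversion gap; the baseline conversion rate below is an arbitrary placeholder, since the article doesn’t cite one:

```python
# Rough arithmetic using only the figures cited above: a 60/40 attraction split
# and a 10:1 per-visitor conversion gap. The baseline rate r is an arbitrary placeholder.
r = 0.03                 # assumed conversion rate at the 24-jam display (illustrative only)
shoppers = 1000

large_display_sales = shoppers * 0.60 * r          # 60% drawn, lower conversion
small_display_sales = shoppers * 0.40 * (10 * r)   # 40% drawn, 10x the conversion

print(large_display_sales, small_display_sales)    # 18.0 vs 120.0
# The bigger display attracted more shoppers but sold far less jam: the gap is
# in filtering/conversion, not in the attraction of abundance itself.
```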

We all have a subconscious “foraging algorithm” that we use to sort through the various options in our environment. One of the biggest factors in this algorithm is the “cost of searching” – how much effort do we need to expend to find the thing we’re looking for? In today’s world, that breaks down into two variables: “finding” and “filtering.” A platform that’s rich in choice – like Netflix – virtually eliminates the cost of “finding.” We are confident that a platform offering a massive number of choices will have something we find interesting. So it comes down to “filtering.” If we feel confident enough in the filtering tools available to us, we will go with the richest environment on offer. The higher our degree of confidence in our ability to “filter,” the less we will want our options limited for us.
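Here’s a toy model of that foraging trade-off – my own illustrative sketch, not something from the research literature; the functional forms and constants are invented purely to make the idea concrete:

```python
import math

# Toy model of the foraging trade-off described above. The functional forms and
# constants are invented; only the direction of the effects matters.

def search_cost(catalogue_size, filter_confidence):
    """Perceived cost of finding something worth choosing."""
    # "Finding" cost: a bigger catalogue makes it more likely something appealing exists.
    finding = 10.0 / math.log(catalogue_size + math.e)
    # "Filtering" cost: grows with catalogue size, shrinks as trust in the filter tools rises.
    filtering = (1.0 - filter_confidence) * math.log(catalogue_size + 1)
    return finding + filtering

# With weak filters, the huge catalogue feels more costly than the small one...
print(round(search_cost(50, 0.2), 2), round(search_cost(50_000, 0.2), 2))   # ~5.67 vs ~9.58
# ...but with trusted filters, the richest environment wins.
print(round(search_cost(50, 0.9), 2), round(search_cost(50_000, 0.9), 2))   # ~2.92 vs ~2.01
```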

So, when does it make sense to limit the options available to an audience? There are some conditions identified by Scheibehenne et al. where the Paradox of Choice is more likely to occur:

Unstructured Choices – The harder it is to categorize the choices available, the more difficult it will be to filter those options.

Choices that are Hard to Compare to Each Other – If you’re comparing apples and oranges, either figuratively or literally, the cognitive load required to compare choices increases the difficulty.

The Complexity of Choices – The more information we have to juggle when we’re making a choice, the greater the likelihood that our brains may become overtaxed in trying to make a choice.

Time Pressure when Making a Choice – If you hear the theme song of Jeopardy when you’re trying to make a choice, you’re more likely to become frustrated when trying to sort through a plethora of options.

If you are in the business of presenting options to customers, remember that the Paradox of Choice is not a hard and fast rule. In fact, the opposite is probably true – the greater the perception of choice, the more attractive your offering will be. The secret is in providing your customers the ability to filter quickly and efficiently.

 

Advertising Meets its Slippery Slope

We’ve now reached the crux of the matter when it comes to the ad biz.

For a couple of centuries now, we’ve been refining the process of advertising. The goal has always been to get people to buy stuff. But right now there is a perfect storm of forces converging that requires some deep navel gazing on the part of us insiders.

It used to be that to get people to buy, all we had to do was inform. Pent-up consumer demand created by expanding markets and new product introductions would take care of the rest. We just had to connect the better mousetraps with the world, which would then duly beat a path to the respective door. Advertising equaled awareness.

But sometime in the waning days of the consumer orgy that followed World War Two, we changed our mandate. Not content with simply informing, we decided to become influencers. We slipped under the surface of the brain, moving from providing information for rational consideration to priming subconscious needs. We started messing with the wiring of our market’s emotional motivations.  We became persuaders.

Persuasion is like a mental iceberg – 90% of the bulk lies below the surface. Rationalization is typically the hastily added layer of ad hoc logic that happens after the decision is already made. This is true to varying degrees for almost any consumer category you can think of, including – unfortunately – our political choices.

This is why, a few columns ago, I said Facebook’s current model is unsustainable. It is based on advertising, and I think advertising may have become unsustainable. The truth is, advertisers have gotten so good at persuading us to do things that we are beginning to revolt. It’s getting just too creepy.

To understand how we got here, let’s break down persuasion. It requires the persuader to shift the beliefs of the persuadee. The bigger the shift required, the tougher the job of persuasion.  We tend to build irrational (aka emotional) bulwarks around our beliefs to preserve them. For this reason, it’s tremendously beneficial to the persuader to understand the belief structure of their target. If they can do this, they can focus on those whose belief structure is most conducive to the shift required.

When it comes to advertisers, the needle on our creative powers of persuasion hasn’t really moved that much in the last half century. There were very persuasive ads created in the 1960’s and there are still great ads being created. The disruption that has moved our industry to the brink of the slippery slope has all happened on the targeting end.

The world we used to live in was a bunch of walled and mostly unconnected physical gardens. Within each, we would have relevant beliefs but they would remain essentially private. You could probably predict with reasonable accuracy the religious beliefs of the members of a local church. But that wouldn’t help you if you were wondering whether the congregation leaned towards Ford or Chevy.  Our beliefs lived inside us, typically unspoken and unmonitored.

That all changed when we created digital mirrors of ourselves through Facebook, Twitter, Google and all the other usual suspects. John Battelle, author of The Search,  once called Google the Database of Intentions. It is certainly that. But our intent also provides an insight into our beliefs. And when it comes to Facebook, we literally map out our entire previously private belief structure for the world to see. That is why Big Data is so potentially invasive. We are opening ourselves up to subconscious manipulation of our beliefs by anyone with the right budget. We are kidding ourselves if we believe ourselves immune to the potential abuse that comes with that. Like I said, 90% of our beliefs are submerged in our subconscious.

We are just beginning to realize how effective the new tools of persuasion are. And as we do so, we are beginning to feel that this is all very unfair. No one likes being manipulated, even if they have willingly laid the groundwork for that manipulation. Our sense of retroactive justice kicks in. We post-rationalize and point fingers. We blame Facebook, or the government, or some hackers in Russia. But these are all just participants in a new ecosystem that we have helped build. The problem is not the players. The problem is the system.

It’s taken a long time, but advertising might just have gotten to the point where it works too well.