In Defense of SEO

Last week, my social media feeds blew up with a plethora (yes – a plethora!) of indignant posts about a new essay that had just dropped on The Verge.

It was penned by Amanda Chicago Lewis and was entitled “The People that Ruined the Internet.”

The reason for the indignation? Those “people” included me and many of my past colleagues. The essay was an investigation of the industry I used to be in. One might even call me one of the original pioneers of said industry. The intro was:

“As the public begins to believe Google isn’t as useful anymore, what happens to the cottage industry of search engine optimization experts who struck content oil and smeared it all over the web? Well, they find a new way to get rich and keep the party going.”

Am I going to refute the observations of Ms. Lewis?

No, because they are not lies. They are observations. And observations happen through the lens the observer uses to observe. What struck me is the lens Lewis chose to see my former industry through, and the power of a lens in media.

Lewis is an investigative journalist. She writes exposés. If you look at the collection of her articles, you don’t have to scroll very far before you have seen the words “boondoggle”, “hustler”, “lies”, “whitewashing”, and “hush money” pop up in her titles. Her journalistic style veers heavily towards being a “hammer”, which makes all that lies in her path “nails.”

This was certainly true for the SEO article. She targeted many of the more colorful characters still in the SEO biz and painted them with the same acerbic, snarky brush. Ironically, she lampoons outsized personalities without once considering that all of this is filtered through her own personality. I have never met Lewis, but I suspect she’s no shrinking violet. In the article, she admits a grudging admiration for the hustlers and “pirates” she interviewed.

Was that edginess part of the SEO industry? Absolutely. But contrary to the picture painted by Lewis, I don’t believe that defined the industry. And I certainly don’t believe we ruined the internet. Google organic search results are better than they were 10 years ago. We all have a better understanding of how people actually search and a good part of that research was done by those in the SEO industry (myself included). The examples of bad SEO that Lewis uses are at least 2 decades out of date.

I think Lewis, and perhaps others of her generation, suffer from “rosy retrospection” – a cognitive bias that automatically assumes things were better yesterday. I have been searching for the better part of 3 decades and – as a sample of one – I don’t agree. I can also say with some empirical backing that the search experience is quantitatively better than it was when we did our first eye tracking study 20 years ago. A repeat study done 10 years ago showed time to first click had decreased and satisfaction with that click had increased. I’m fairly certain that a similar study would show that the search experience is better today than it was a decade ago. If this is a “search optimized hellhole”, it’s much less hellish than it was back in the “good old days” of search.

One of the reasons for that improvement is that millions of websites have been optimized by SEOs (a label which, by the way, Amanda, has absolutely nothing to do with wanting to be mistaken for a CEO) to unlock unindexable content, fix broken code, improve usability, tighten up and categorize content and generally make the Internet a less shitty and confusing place. Not such an ignoble pursuit for “a bunch of megalomaniacal jerks [who] were degrading our collective sense of reality because they wanted to buy Lamborghinis and prove they could vanquish the almighty algorithm.”

Amanda Chicago Lewis did interview those who sat astride the world of search providers and the world of SEO: Danny Sullivan (“angry and defensive” – according to Lewis), Barry Schwartz (“an unbelievably fast talker”), Duane Forrester (a “consummate schmoozer”) and Matt Cutts (an “SEO celebrity”). Each tried to refute her take that things are “broken” and the SEOs are to blame, but she brushed those aside, intent on caricaturing them as a cast of characters from a carnival sideshow. Out of the entire scathing diatribe, one scant paragraph grudgingly acknowledges that maybe not all SEO is bad. That said, Lewis immediately spins around and says that it doesn’t matter, because the bad completely negates the good.

Obviously, I don’t agree with Lewis’s take on the SEO industry. Maybe it’s because I spent the better part of 20 years in the industry and know it at a level Lewis never could. But what irritates me the most is that she made no attempt to go beyond taking the quick and easy shots. She had picked the lens through which she would view SEO before the very first interview, and everything was colored by that lens. Was her take untrue? Not exactly. But it was unfair. And that’s why reporters like Lewis have degraded journalism to the point where it’s just clickbait, with a few more words thrown in.

Lewis gleefully stereotypes SEOs as “content goblin(s) willing to eschew rules, morals, and good taste in exchange for eyeballs and mountains of cash.” That’s simply not true. It’s no more true than saying all investigative journalists are “screeching acid-tongued harpies who are hopelessly biased and cover their topics with all the subtlety of a flame-thrower.”

P.S.  I did notice the article was optimized for search, with keywords prominently shown in the URL. Does that make the Verge and Lewis SEOs?

The Seedy, Seedy World of Keto Gummies

OK, I admit it. I play games on my phone.

Also, I’m cheap, so I play the free, ad-supported versions.

You might call this a brain-dead waste of time, but I prefer to think of it as diligent and brave investigative journalism.  The time I spend playing Bricks Ball Crusher or Toy Blast is, in actuality, my research into the dark recesses of advertising on behalf of you, the more cerebral and discerning readers of this blog. I bravely sacrifice my own self-esteem so that I might tread the paths of questionable commerce and save you the trip.

You see, it was because of my game playing that I was introduced to the seediest of seedy slums in the ad world, the underbelly known as the in-game ad. One ad, in particular, reached new levels of low.

If you haven’t heard of the Keto Gummies Scam, allow me to share my experience.

This ad hawked miracle gummies that “burn the fat off you” with no dieting or exercising. Several before-and-after photos showed the results of these amazing little miracle drops of gelatin. They had an impressive supporting cast. The stars of the TV pitchfest “Shark Tank” had invested in them. Both Rebel Wilson and Adele had used them to shed pounds. And then — the coup de grâce — Oprah (yes, the Oprah!) endorsed them.

The Gummy Guys went right to the top of the celebrity endorsement hierarchy when they targeted the big O.

As an ex-ad guy, I couldn’t ignore this ad. It was like watching a malvertising train wreck. There was so much here that screamed of scam, I couldn’t believe it. The celebrity pics used were painfully obvious in their use of Photoshopping. The claims were about as solid as a toilet paper Taj Mahal. The entire premise reeked of snake oil.

I admit, I was morbidly fascinated.

First, of all the celebrities in all the world, why would you misappropriate Oprah’s brand? She is famously protective of it. If you’re messing with Oprah, you’ve either got to be incredibly stupid or have some serious stones. So which was it?

I started digging.

First of all, this isn’t new. The Keto Gummy Scam has been around for at least a year. In addition to Oprah, they have also targeted Kevin Costner, Rihanna, Trisha Yearwood, Tom Selleck, Kelly Clarkson, Melissa McCarthy — even Wayne Gretzky.

Last fall, Oprah shared a video on Instagram warning people that she had nothing to do with the gummies and asking people not to fall for the scam. Other celebrities have followed suit and issued their own warnings.

Snopes.com has dug into the Keto Gummy Scam a couple of times. One exposé focused on the false claims that the gummies were featured on “Shark Tank.” The first report, from a year ago, focused just on the supposed Oprah Winfrey endorsement. That means these fraudulent ads have been associated with Oprah for at least a year and, legally, she has been unable to stop them.

To me, that rules out my first supposition. These people aren’t stupid.

This becomes apparent when you start trying to pick your way through the maze of misinformation they have built to support these ads. If you click on the ad you’re taken to a webpage that looks like it’s from a reliable news source. The one I found looked like it was Time’s website. There you’ll find a “one-on-one interview” with Oprah about how she launched a partnership with Weight Watchers to create the Max Science Keto gummies. According to the interview, she called the CEO of Weight Watchers and said, “If you can’t create a product that helps people lose weight faster without diet and exercise, then I’m backing out of my investment and moving on.”

This is all complete bullshit. But it’s convincing bullshit.

It doesn’t stop there. Clickbait texts with outrageous claims, including the supposed death of Oprah, drive clicks through to more bogus sites with more outrageous claims about gummies. While the sites mimic legitimate news organizations like Time, they reside on bogus domains such as genuinesmother.com and newsurvey22offer.com. Or, if you go to them through an in-app link, the URLs are cloaked and remain invisible.

If you turn to a search engine to do some due diligence, the scammers will be waiting for you. If you search for “keto gummies scam” the results page is stuffed with both sponsored and organic spam that appears to support the outrageous claims made in the ads. Paid-content outlets like Outlook India have placed articles offering reviews of the “best keto gummies,” fake reviews, and articles assuring potential victims that the gummies are not a scam but are a proven way to lose weight.

As the Snopes investigators found, it’s almost impossible to track these gummies to any company. Even if you get gummies shipped to you, there’s no return address or phone number. Orders came from a shadowy “Fulfillment Center” in places like Smyrna, Tennessee. Once they get your credit card, the unauthorized charges start.

Even the name of the product seems to be hard to nail down. The scammers seem to keep cycling through a roster of names.

This is, by every definition, predatory advertising. It is the worst example of what we as marketers do. But, like all predators, it can only exist because an ecosystem allows it to exist. It’s something we have to think about.

I certainly will. More on that soon.

Search and ChatGPT – You Still Can’t Get There From Here

I’m wrapping up my ChatGPTrilogy with a shout-out to an old friend that will be familiar to many Mediaposters – Aaron Goldman. 13 years ago Aaron wrote a book called Everything I Know About Marketing I Learned from Google. Just a few weeks ago, Aaron shared a post entitled “In a World of AI, is Everything I Know about Marketing (still) Learned from Google”. In it, he looked at the last chapter of the book, which he called Future-Proofing. Part of that chapter was based on a conversation Aaron and I had back in 2010 about what search might look like in the future.

Did we get it right? Well, remarkably, we got a lot more right than we got wrong, especially with the advent of Natural Language tools such as ChatGPT and virtual assistants like Siri.

We talked a lot about something I called “app-ssistants”. I explained, “the idea of search as a destination is an idea whose days are numbered. The important thing won’t be search. It will be the platform and the apps that run on it. The next big thing will be the ability to seamlessly find just the right app for your intent and utilize it immediately.” In this context, “the information itself will become less and less important and the app that allows utilization of the information will become more and more important.”

To be honest, this evolution in search has taken a lot longer than I thought it would back then: “Intent will be more fully supported from end to end. Right now, we have to keep our master ‘intent’ plan in place as we handle the individual tasks on the way to that intent.”

As it currently stands, searching for complex answers requires a lot of heavy lifting. In that discussion, I used the example of planning a trip. “Imagine if there were an app that could keep my master intent in mind for the entire process. It would know what my end goal was, would be tailored to understand my personal preferences and would use search to go out and gather the required information. When we look at alignment of intent, [a shift from search to apps is] a really intriguing concept for marketers to consider.”

So, the big question is, do we have such a tool? Is it ChatGPT? I decided to give it a try and see. After feeding ChatGPT a couple of carefully crafted prompts about a trip I’d like to take to Eastern Europe someday, I decided the answer is no. We’re not quite there yet. But we’re closer.

After a couple of iterations, ChatGPT did a credible job of assembling a potential itinerary of a trip to Croatia and Slovenia. It even made me aware of some options I hadn’t run across in my previous research. But it left me hanging well short of the “app-ssistant” I was dreaming of in 2010. Essentially, I got a suggestion but all the detail work to put it into an actual trip still required me to do hundreds of searches in various places.

The problem with ChatGPT is that it gets stuck between the millions of functionality siloes – or “walled gardens” – that make up the Internet. Those “walled gardens” exist because they represent opportunities for monetization. In order for an app-ssistant to be able to multitask and make our lives easier, we need a virtual “commonage” that gets rid of some of these walls. And that’s probably the biggest reason we haven’t seen a truly useful iteration of the functionality I predicted more than a decade ago.
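To make the “commonage” idea concrete, here is a purely hypothetical sketch of the kind of shared interface every walled garden would need to expose before an app-ssistant could carry a single intent end to end. Nothing like this exists today; the class and method names are invented for illustration.

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Option:
    id: str
    description: str
    price: float

class Garden(Protocol):
    """A hypothetical 'commonage' interface: the same two verbs exposed by
    every airline, hotel and booking silo. No such common layer exists,
    which is exactly why the hand-offs still fall back to the user."""
    def search(self, query: str) -> list[Option]: ...
    def book(self, option_id: str) -> str: ...

class FakeHotelSite:
    """A stand-in silo so the sketch actually runs end to end."""
    def search(self, query: str) -> list[Option]:
        return [Option("rm-101", f"Room matching '{query}'", 140.0)]

    def book(self, option_id: str) -> str:
        return f"confirmed:{option_id}"

def plan_trip(intent: str, gardens: list[Garden]) -> list[str]:
    """Carry one master intent across every garden: gather the options,
    pick one, and act on it without handing the task back to the user."""
    confirmations = []
    for garden in gardens:
        options = garden.search(intent)
        if options:
            cheapest = min(options, key=lambda o: o.price)
            confirmations.append(garden.book(cheapest.id))
    return confirmations

print(plan_trip("three nights in Ljubljana in June", [FakeHotelSite()]))
```

Today each real “garden” exposes its own proprietary interface, or none at all, so a loop like that can’t be written against the live web. That missing common layer is the wall ChatGPT keeps running into.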

This conflict between capitalism and the concept of a commonage goes back at least to the Magna Carta. As England’s economy transitioned from feudalism to capitalism, enclosure saw the building of fences and the wiping out of lands held as a commonage. The actual landscape became a collection of walled gardens that enforced the property rights of each parcel and protected the future production value of those parcels.

This history, which played out over hundreds of years, was repeated and compressed into a few decades online. We went from the naïve idealism of a “free for all” internet in the early days to the balkanized patchwork of monetization siloes that currently make up the Web.

Right now, search engines are the closest thing we have to a commonage on the virtual landscape. Search engines like Google can pull data from within many gardens, but if we actually try to use the data, we won’t get far before we run into a wall.

To go back to the idea of trip planning, I might be able to see what it costs to fly to Rome or what the cost of accommodations in Venice is on a search engine, but I can’t book a flight or reserve a room. To do that, I have to visit an online booking site. If I’m on a search engine, I can manually navigate this transition fairly easily. But it would stop something like ChatGPT in its tracks.

When I talked to Aaron 13 years ago, I envisioned search becoming a platform that lived underneath apps which could provide more functionality to the user. But I was also skeptical about Google’s willingness to do this, as I stated in a later post here on MediaPost. In that post, I thought that this might be an easier transition for Microsoft.

Whether it was prescience or just dumb luck, it is indeed Microsoft taking the first steps towards integrating search with ChatGPT, through its recent integration with Bing. Expedia (who also has Microsoft DNA in its genome) has also taken a shot at integrating ChatGPT in a natural language chat interface.

This flips my original forecast on its head. Rather than the data becoming common ground, it’s the chat interface that’s popping up everywhere. Rather than tearing down the walls that divide the online landscape, ChatGPT is being tacked up as window decoration on those walls.

I did try planning that same trip on both Bing and Expedia. Bing – alas – also left me well short of my imagined destination. Expedia – being a monetization site to begin with – got me a little closer, but it still didn’t seem that I could get to where I wanted to go.

I’m sorry to say search didn’t come nearly as far as I hoped it would 13 years ago. Even with ChatGPT thumbtacked onto the interface, we’re just not there yet.

(Feature Image: OpenAI Art generated from the prompt: “A Van Gogh painting of a chatbot on a visit to Croatia”)

The Dangerous Bits about ChatGPT

Last week, I shared how ChatGPT got a few things wrong when I asked it “who Gord Hotchkiss was.” I did this with my tongue at least partially implanted in cheek – but the response did show me a real potential danger here, coming from how we will interact with ChatGPT.

When things go wrong, we love to assign blame. And if ChatGPT gets things wrong, we will be quick to point the finger at it. But let’s remember, ChatGPT is a tool, and the fault very seldom lies with the tool. The fault usually lies with the person using the tool.

First of all, let’s look at why ChatGPT put together a bio for me that was somewhat less than accurate (although it was very flattering to yours truly).

When AI Hallucinates

I have found a few articles that call ChatGPT out for lying. But lying is an intentional act, and – as far as I know – ChatGPT has no intention of deliberately leading us astray. Based on how ChatGPT pulls together information and synthesizes it into a natural language response, it actually thought that “Gord Hotchkiss” did the things it told me I had done.

You could more accurately say ChatGPT is hallucinating – giving a false picture based on what information it retrieves and then tries to connect into a narrative. It’s a flaw that will undoubtedly get better with time.

The problem comes with how ChatGPT handles its dataset and determines relevance between items in that dataset. In this thorough examination by Machine Learning expert Devansh Devansh, ChatGPT is compared to predictive autocomplete on your phone. Sometimes, through a glitch in the AI, it can take a weird direction.

When this happens on your phone, it’s word by word and you can easily spot where things are going off the rails. With ChatGPT, an initial error that might be small at first continues to propagate until the AI has spun complete bullshit and packaged it as truth. This is how it fabricated the Think Tank of Human Values in Business, a completely fictional organization, and inserted it into my CV in a very convincing way.
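To see how a small early wobble becomes a confidently told fiction, here is a minimal toy sketch: a bigram “autocomplete” built on a handful of made-up sentences. It is nothing like the actual machinery behind ChatGPT, but it shows the same failure mode, because each new word is conditioned only on what has already been written, so an odd early pick steers everything that follows.

```python
import random

# A deliberately crude bigram "autocomplete" trained on a toy corpus.
corpus = (
    "gord hotchkiss wrote a column about search marketing . "
    "gord hotchkiss founded a search marketing agency . "
    "the think tank published a report about business values . "
    "the think tank promoted a fictional organization ."
).split()

# Record which words were seen following which.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, steps=8):
    """Pick each next word at random from the words that followed the
    previous word in the toy corpus, then keep going from that choice."""
    words = [start]
    for _ in range(steps):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)

# Run it a few times: whenever an early pick differs between runs, every
# word after it differs too, yet each result still reads as a fluent,
# plausible statement about "gord hotchkiss".
for _ in range(3):
    print(generate("gord"))
```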

There are many, many others who know much more about AI and Natural Language Processing than I do, so I’m going to recognize my limits and leave it there. Let’s just say that ChatGPT is prone to sharing its AI hallucinations in a very convincing way.

Users of ChatGPT Won’t Admit Its Limitations

I know and you know that marketers are salivating over the possibility of AI producing content at scale for automated marketing campaigns. There is a frenzy of positively giddy accounts about how ChatGPT will “revolutionize Content Creation and Analysis” – including this admittedly tongue-in-cheek one co-authored by MediaPost Editor in Chief Joe Mandese and – of course – ChatGPT.

So what happens when ChatGPT starts to hallucinate in the middle of a massive social media campaign that is totally on autopilot? Who will be the ghost in the machine that will say “Whoa there, let’s just take a sec to make sure we’re not spinning out fictitious and potentially dangerous content?”

No one. Marketers are only human, and humans will always look for the path of least resistance. We work to eliminate friction, not add it. If we can automate marketing, we will. And we will shift the onus of verifying information to the consumer of that information.

Don’t tell me we won’t, because we have in the past and we will in the future.

We Believe What We’re Told

We might like to believe we’re Cartesian, but when it comes to consuming information, we’re actually Spinozian.

Let me explain. French philosopher René Descartes and Dutch philosopher Baruch Spinoza had two different views of how we determine if something is true.

Descartes believed that understanding and believing were two different processes. According to Descartes, when we get new information, we first analyze it and then decide if we believe it or not. This is the rational assessment that publishers and marketers always insist that we humans do and it’s their fallback position when they’re accused of spreading misinformation.

But Baruch Spinoza believed that understanding and belief happened at the same time. We start from a default position of believing information to be true without really analyzing it.

In 1993, Harvard Psychology Professor Daniel Gilbert decided to put the debate to the test (Gilbert, Tafarodi and Malone). He split a group of volunteers in half and gave both halves a text description detailing a real robbery. In the text there were true statements, in green, and false statements, in red. Some of the false statements made the crime appear to be more violent.

After reading the text, the study participants were supposed to decide on a fair sentence. But one of the groups got interrupted with distractions. The other group completed the exercise with no distractions. Gilbert and his researchers believed the distracted group would behave in a more typical way.

The distracted group gave out substantially harsher sentences than the other group. Because they were distracted, they forgot that green sentences were true and red ones were false. They believed everything they read (in fact, Gilbert’s paper was called “You Can’t Not Believe Everything You Read”).

Gilbert’s study showed that humans tend to believe first and that we actually have to “unbelieve” if something is eventually proven to us to be false. One study even found the place in our brain where this happens – the right inferior prefrontal cortex. This suggests that “unbelieving” causes the brain to have to work harder than believing, which happens by default.

This brings up a three-pronged dilemma when we consider ChatGPT: it will tend to hallucinate (at least for now), users of ChatGPT will disregard that flaw when there are significant benefits to doing so, and consumers of ChatGPT generated content will believe those hallucinations without rational consideration.

When Gilbert wrote his paper, he was still 3 decades away from this dilemma, but he wrapped up by framing a prescient debate:

“The Spinozan hypothesis suggests that we are not by nature, but we can be by artifice, skeptical consumers of information. If we allow this conceptualization of belief to replace our Cartesian folk psychology, then how shall we use it to structure our own society? Shall we pander to our initial gullibility and accept the social costs of prior restraint, realizing that some good ideas will inevitably be suppressed by the arbiters of right thinking? Or shall we deregulate the marketplace of thought and accept the costs that may accrue when people are allowed to encounter bad ideas? The answer is not an easy one, but history suggests that unless we make this decision ourselves, someone will gladly make it for us.”

Daniel Gilbert

What Gilbert couldn’t know at the time was that “someone” might actually be a “something.”

(Image:  Etienne Girardet on Unsplash)

The Biases of Artificial Intelligence: Our Devils are in the Data

I believe that – over time – technology does move us forward. I further believe that, even with all the unintended consequences it brings, technology has made the world a better place to live in. I would rather step forward with my children and grandchildren (the first of which has just arrived) into a more advanced world than step backwards in the world of my grandparents, or my great grandparents. We now have a longer and better life, thanks in large part to technology. This, I’m sure, makes me a techno-optimist.

But my optimism is of a pragmatic sort. I’m fully aware that it is not a smooth path forward. There are bumps and potholes aplenty along the way. I accept that along with my optimism.

Technology, for example, does not play all that fairly. Techno-optimists tend to be white and mostly male. They usually come from rich countries, because technology helps rich countries far more than it helps poor ones. Technology plays by the same rules as trickle-down economics: a rising tide that will eventually raise all boats, just not at the same rate.

Take democracy, for instance. In June 2009, journalist Andrew Sullivan declared “The revolution will be Twittered!” after protests erupted in Iran. Techno-optimists and neo-liberals were quick to declare social media and the Internet as the saviour of democracy. But, even then, the optimism was premature – even misplaced.

In his book The Net Delusion: The Dark Side of Internet Freedom, journalist and social commentator Evgeny Morozov details how digital technologies have been just as effectively used by repressive regimes to squash democracy. The book was published in 2011. Just 5 years later, that same technology would take the U.S. on a path that came perilously close to dismantling democracy. As of right now, we’re still not sure how it will all work out. As Morozov reminds us, technology – in and of itself – is not an answer. It is a tool. Its impact will be determined by those that built the tool and, more importantly, those that use the tool.

Also, tools are not built out of the ether. They are necessarily products of the environment that spawned them. And this brings us to the systemic problems of artificial intelligence.

Search is something we all use every day. And we probably don’t think that Google (or other search engines) is biased, or even racist. But a recent study published in the journal Proceedings of the National Academy of Sciences shows that the algorithms behind search are built on top of the biases endemic in our society.

“There is increasing concern that algorithms used by modern AI systems produce discriminatory outputs, presumably because they are trained on data in which societal biases are embedded,” says Madalina Vlasceanu, a postdoctoral fellow in New York University’s psychology department and the paper’s lead author.

To assess possible gender bias in search results, the researchers examined whether words that should refer with equal probability to a man or a woman, such as “person,” “student,” or “human,” are more often assumed to be a man. They conducted Google image searches for “person” across 37 countries. The results showed that the proportion of male images yielded from these searches was higher in nations with greater gender inequality, revealing that algorithmic gender bias tracks with societal gender inequality.
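The study’s data isn’t reproduced here, but the shape of the analysis is easy to sketch: for each country, pair the share of male faces returned for a gender-neutral query with a gender-inequality score, then check whether the two move together. The numbers below are invented purely for illustration.

```python
from statistics import correlation  # Python 3.10+

# Hypothetical, made-up values: share of male images returned for a
# gender-neutral query, and a gender-inequality index, per country.
male_share = {"A": 0.52, "B": 0.58, "C": 0.63, "D": 0.71, "E": 0.76}
inequality = {"A": 0.12, "B": 0.20, "C": 0.31, "D": 0.42, "E": 0.49}

countries = sorted(male_share)
r = correlation([male_share[c] for c in countries],
                [inequality[c] for c in countries])

# A strongly positive r is the pattern the study reports: the more unequal
# the society, the more a search for "person" skews male.
print(f"Pearson r across {len(countries)} countries: {r:.2f}")
```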

In a 2020 opinion piece in the MIT Technology Review, researcher and AI activist Deborah Raji wrote:

“I’ve often been told, ‘The data does not lie.’ However, that has never been my experience. For me, the data nearly always lies. Google Image search results for ‘healthy skin’ show only light-skinned women, and a query on ‘Black girls’ still returns pornography. The CelebA face data set has labels of ‘big nose’ and ‘big lips’ that are disproportionately assigned to darker-skinned female faces like mine. ImageNet-trained models label me a ‘bad person,’ a ‘drug addict,’ or a ‘failure.’ Data sets for detecting skin cancer are missing samples of darker skin types.”

Deborah Raji, MIT Technology Review

These biases in search highlight the biases in a culture. Search brings back a representation of content that has been published online; a reflection of a society’s perceptions. In these cases, the devil is in the data. The search algorithm may not be inherently biased, but it does reflect the systemic biases of our culture. The more biased the culture, the more it will be reflected in technologies that comb through the data created by that culture. This is regrettable in something like image search results, but when these same biases show up in the facial recognition software used in the justice system, it can be catastrophic.

In an article in Penn Law’s Regulatory Review, the authors reported that, “In a 2019 National Institute of Standards and Technology report, researchers studied 189 facial recognition algorithms—“a majority of the industry.” They found that most facial recognition algorithms exhibit bias. According to the researchers, facial recognition technologies falsely identified Black and Asian faces 10 to 100 times more often than they did white faces. The technologies also falsely identified women more than they did men—making Black women particularly vulnerable to algorithmic bias. Algorithms using U.S. law enforcement images falsely identified Native Americans more often than people from other demographics.”

Most of these issues lie with how technology is used. But how about those that build the technology? Couldn’t they program the bias out of the system?

There we have a problem. The thing about societal bias is that it is typically recognized by its victims, not those that propagate it. And the culture of the tech industry is hardly gender-balanced or diverse. According to a report from the McKinsey Institute for Black Economic Mobility, if we follow the current trajectory, experts in tech believe it would take 95 years for Black workers to reach an equitable level of private sector paid employment.

Facebook, for example, barely moved one percentage point from 3% in 2014 to 3.8% in 2020 with respect to hiring Black tech workers but improved by 8% in those same six years when hiring women. Only 4.3% of the company’s workforce is Hispanic. This essential whiteness of tech extends to the field of AI as well.

Yes, I’m a techno-optimist, but I realize that optimism must be placed in the people who build and use the technology. And because of that, we must try harder. We must do better. Technology alone isn’t the answer for a better, fairer world.  We are.

I Was So Wrong in 1996…

It’s that time of year – the time when we sprain our neck trying to look backwards and forwards at the same time. Your email inbox, like mine, is probably crammed with 2021 recaps and 2022 predictions.

I’ve given up on predictions. I have a horrible track record. In just a few seconds, I’ll tell you how horrible. But here, at the beginning of 2022, I will look back. And I will substantially overshoot “a year in review” by going all the way back to 1996, 26 years ago. Let me tell you why I’m in the mood for some reminiscing.

In amongst the afore-mentioned “look back” and “look forward” items I saw recently, there was something else that hit my radar: a number of companies looking for SEO directors. After being out of the industry for almost 10 years, I was mildly surprised that SEO still seemed to be a rock-solid career choice. And that brings me both to my story about 1996 and what was probably my worst prediction about the future of digital marketing.

It was in late 1996 that I first started thinking about optimizing sites for the search engines and directories of the time: Infoseek, Yahoo, Excite, Lycos, Altavista, Looksmart and Hotbot. Early in 1997 I discovered Danny Sullivan’s Webmaster’s Guide to Search Engines. It was revelatory. After much trial and error, I was reasonably certain I could get sites ranking for pretty much any term. We had our handful of local clients ranking on Page One of those sites for terms like “boats,” “hotels”, “motels”, “men’s shirts” and “Ford Mustang”. It was the Wild West. Small and nimble web start-ups were routinely kicking Fortune 500 ass on the digital frontier.

As a local agency that had played around with web design while doing traditional marketing, I was intrigued by this opportunity. Somewhere near the end of 1997 I did an internal manifesto where I speculated on the future of this “Internet” thing and what it might mean for our tiny agency (I had just brought on board my eventual partner, Bill Barnes, and we had one other full-time employee). I wish I could find that original document, but I remember saying something to the effect of, “This search engine opportunity will probably only last a year or two until the engines crack down and close the loopholes.” Given that, we decided to go for broke and seize that opportunity.

In 1998 we registered the domain www.searchengineposition.com. This was a big step. If you could get your main keywords in your domain name, it virtually guaranteed you link juice. At that time, “Search engine optimization” hadn’t emerged as the industry label. Search engine positioning was the more common term. We couldn’t get www.searchenginepositioning.com because domain names were limited by the number of characters you could use.

We had our domain and soon we had a site. We needed all the help we could get, because according to my prediction, we only had until 2000 or so to make as much as we could from this whole “search thing.” The rest, as they say, was history. It just wasn’t the history I had predicted.

To be fair, I wasn’t the only one making shitty predictions at the time. In 1995, 3Com co-founder Robert Metcalfe (also the co-inventor of Ethernet) said in a column in InfoWorld:

“Almost all of the many predictions now being made about 1996 hinge on the Internet’s continuing exponential growth. But I predict the Internet, which only just recently got this section here in InfoWorld, will soon go spectacularly supernova and in 1996 catastrophically collapse.”

And in 1998, Nobel Prize-winning economist Paul Krugman said,

“The growth of the Internet will slow drastically, as the flaw in ‘Metcalfe’s law’ becomes apparent: most people have nothing to say to each other! By 2005, it will become clear that the Internet’s impact on the economy has been no greater than the fax machine’s”

Both of those people were way smarter than I was, so if I was clueless about the future, at least I was in good company.

As we now know, SEO would be fine, thank you very much. In 2004, some 6 years later, in my very first post for MediaPost, I wrote:

“I believe years from now that…2004 … will be a milestone in the (Search) industry. I think it will mark the beginning of a year that will dramatically alter the nature of search marketing.”

That prediction, as it turned out, was a little more accurate. In 2004, Google’s AdWords program really hit its stride, doubling revenue from $1.5 billion the previous year to $3 billion and starting its hockey-stick climb up to its current level, just south of $150 billion (in 2020).

The reason search – and organic search optimization – never fizzled out was that it was a fundamental connection between user intent and the ever-expanding ocean of available content. Search Engine Optimization turned out to be a much better label for the industry than Search Engine Positioning, despite my unfortunate choice of domain names. The latter was really an attempt to game the algorithms. The former was making sure content was findable and indexable. Hindsight has shown that it was a much more sustainable approach.

I ended that first post talking about the search industry of 2004 by saying,

“And to think, one day I’ll be able to say I was there.”

I guess today is that day.

Whatever Happened to the Google of 2001?

Having lived through it, I can say that the decade from 2000 to 2010 was an exceptional time in corporate history. I was reminded of this as I was reading media critic and journalist Ken Auletta’s book, “Googled: The End of the World as We Know It.” Auletta, along with many others, sensed a seismic disruption in the way media worked. A ton of books came out on this topic in the same time frame, and Google was the company most often singled out as the cause of the disruption.

Auletta’s book was published in 2009, near the end of this decade, and it’s interesting reading it in light of the decade plus that has passed since. There was a sort of breathless urgency in the telling of the story, a sense that this was ground zero of a shift that would be historic in scope. The very choice of Auletta’s title reinforces this: “The End of the World as We Know It.”

So, with 10 years plus of hindsight, was he right? Did the world we knew end?

Well, yes. And Google certainly contributed to this. But it probably didn’t change in quite the way Auletta hinted at. If anything, Facebook ended up having a more dramatic impact on how we think of media, but not in a good way.

At the time, we all watched Google take its first steps as a corporation with a mixture of incredulous awe and not a small amount of schadenfreude. Larry Page and Sergey Brin were determined to do it their own way.

We in the search marketing industry had front row seats to this. We attended social mixers on the Google campus. We rubbed elbows at industry events with Page, Brin, Eric Schmidt, Marissa Mayer, Matt Cutts, Tim Armstrong, Craig Silverstein, Sheryl Sandberg and many others profiled in the book. What they were trying to do seemed a little insane, but we all hoped it would work out.

We wanted a disruptive and successful company to not be evil. We welcomed its determination — even if it seemed naïve — to completely upend the worlds of media and advertising. We even admired Google’s total disregard for marketing as a corporate priority.

But there was no small amount of hubris at the Googleplex — and for this reason, we also hedged our hopeful bets with just enough cynicism to be able to say “we told you so” if it all came crashing down.

In that decade, everything seemed so audacious and brashly hopeful. It seemed like ideological optimism might — just might — rewrite the corporate rulebook. If a revolution did take place, we wanted to be close enough to golf clap the revolutionaries onward without getting directly in the line of fire ourselves.

Of course, we know now that what took place wasn’t nearly that dramatic. Google became a business: a very successful business with shareholders, a grown-up CEO and a board of directors, but still a business not all that dissimilar to other Fortune 100 examples. Yes, Google did change the world, but the world also changed Google. What we got was more evolution than revolution.

The optimism of 2000 to 2010 would be ground down in the next 10 years by the same forces that have been driving corporate America for the past 200 years: the need to expand markets, maximize profits and keep shareholders happy. The brash ideologies of founders would eventually morph to accommodate ad-supported revenue models.

As we now know, the world was changed by the introduction of ways to make advertising even more pervasively influential and potentially harmful. The technological promise of 20 years ago has been subverted to screw with the very fabric of our culture.

I didn’t see that coming back in 2001. I probably should have known better.

How We Forage for the News We Want

The Reuters Institute out of the UK just released a comprehensive study looking at how people around the world are finding their news. There is a lot here, so I’ll break it into pieces over a few columns and look at the most interesting aspects. Today, I’ll look at the 50,000-foot view, which can best be summarized as a dysfunctional relationship between our news sources and ourselves. And like most dysfunctional relationships, the culprit here is a lack of trust.

Before we dive in, we should spend some time looking at how the way we access news has changed over the last several years.

Over my lifetime, we have trended in two general directions – less cognitively demanding news channels and less destination-specific news sources. The most obvious shift has been away from print. According to Journalism.org and the Pew Research Center, circulation of U.S. daily newspapers peaked around 1990, at about 62 and a half million. That’s one subscription for every 4 people in the country at that time.

In 2018, it was projected that circulation had dropped more than 50%, to less than 30 million. That would have been one subscription for every 10 people. We were no longer reading our news in a non-digital format. And that may have a significant impact on our understanding of the news. I’ll return to this in another column, but for now, let’s just understand that our brain operates in a significantly different way when it’s reading rather than watching or listening.

Up to the end of the last century, we generally trusted news destinations. Whether it be a daily newspaper like the New York Times, a news magazine like Time or a nightly newscast such as any of the network news shows, each was a destination that offered one thing above all others – the news. And whether you agreed with them or not, each had an editorial process that governed what news was shared. We had a loyalty to our chosen news destinations that was built on trust.

Over the past two decades, this trust has broken down due to one primary factor – our continuing use of social media. And that has dramatically shifted how we get our news.

In the US, three out of every four people use online sources to get their news. One in two use social media. Those aged 18 to 24 are more than twice as likely to rely on social media. In the UK, under-35s get more of their news from social media than any other source.

Also, influencers have become a source of news, particularly amongst young people. In the US, a quarter of those 18 to 24 used Instagram as a source of news about COVID.

This means that most times, we’re getting our news through a social media lens. Let’s set aside for a moment the filtering and information veracity problems that introduces. Let’s just talk about intent for a moment.

I have talked extensively in the past about information foraging when it comes to search. When information is “patchy” and spread diversely, the brain has to make a quickly calculated guess about which patch is most likely to hold the information it’s looking for. With Information Foraging, the intent we have frames everything that comes after.
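Pirolli and Card put a rough formula behind that quick calculation: favor the patch with the highest expected rate of gain, roughly the information you expect to find divided by the time it costs to reach the patch and dig through it. Here is a minimal sketch of that trade-off; the patch names and numbers are mine, invented for illustration, not taken from any study.

```python
# Toy information-foraging comparison: rate of gain R = G / (T_between + T_within),
# where G is the expected information gained, T_between the time to reach the
# patch and T_within the time spent digging once there. All values invented.
patches = {
    # patch name: (expected useful items, time to reach, time spent digging)
    "social feed":   (2.0, 0.1, 2.0),
    "news site":     (4.0, 1.0, 8.0),
    "official site": (6.0, 3.0, 12.0),
}

def rate_of_gain(gain, time_between, time_within):
    """Expected information gained per unit of time spent."""
    return gain / (time_between + time_within)

for name, values in sorted(patches.items(),
                           key=lambda kv: rate_of_gain(*kv[1]),
                           reverse=True):
    print(f"{name:14s} R = {rate_of_gain(*values):.2f}")

# With these invented numbers, the cheap-to-reach social feed wins on rate
# even though richer patches exist; give accuracy more weight in the gain
# column and official sources can come out on top instead.
```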

In today’s digital world, information sources have disaggregated into profoundly patchy environments. We still go to news-first destinations like CNN or Fox News but we also get much of our information about the world through our social media feeds. What was interesting about the Reuters report was that it was started before the COVID pandemic, but the second part of the study was conducted during COVID. And it highlights a fascinating truth about our relationship with the news when it comes to trust.

The study shows that the majority of us don’t trust the news we get through social media but most times, we’re okay with that. Less than 40% of people trust the news in general, and even when we pick a source, less than half of us trust that particular channel. Only 22% indicated they trust the news they see in social media. Yet half of us admit we use social media to get our news. The younger we are, the more reliant we are on social media for news. The fastest growing sources for news amongst all age groups – but especially those under 30 – are Instagram, SnapChat and WhatsApp.

Here’s another troubling fact that fell out of the study. Social platforms, especially Instagram and SnapChat, are dominated by influencers. That means that much of our news comes to us by way of a celebrity influencer reposting it on their feed. This is a far cry from the editorial review process that used to act as a gate keeper on our trusted news sources.

So why do we continue to use news sources we admit we don’t trust? I suspect it may have to do with something called the Meaning Maintenance Model. Proposed in 2006 by Heine, Proulx and Vohs, the model speculates that a primary driver for us is to maintain our beliefs in how the world works. This is related to the sense making loop (Klein, Moon and Hoffman) I’ve also talked about in the past. We make sense of the world by first starting with the existing frame of what we believe to be true. If what we’re experiencing is significantly different from what we believe, we will update our frame to align with the new evidence.

What the Meaning Maintenance Model suggests is that we will go to great lengths to avoid updating our frame. It’s much easier just to find supposed evidence that supports our current beliefs. So, if our intent is to get news that supports our existing world view, social media is the perfect source. It’s algorithmically filtered to match our current frame. Even if we believe the information is suspect, it still comforts us to have our beliefs confirmed. This works well for news about politics, societal concerns and other ideologically polarized topics.

We don’t like to admit this is the case. According to the Reuters study, 60% of us indicate we want news sources that are objective and not biased to any particular point of view. But this doesn’t jibe with reality at all. As I wrote about in a previous column, almost all mainstream news sources in the US appear to have a significant bias to the right or left. If we’re talking about news that comes through social media channels, that bias is doubled down on. In practice, we are quite happy foraging from news sources that are biased, as long as that bias matches our own.

But then something like COVID comes along. Suddenly, we all have skin in the game in a very real and immediate way. Our information foraging intent changes and our minimum threshold for the reliability of our news sources goes way up. The Reuters study found that when it comes to sourcing COVID information, the most trusted sources are official sites of health and scientific organizations. The least trusted sources are random strangers, social media and messaging apps.

It requires some reading between the lines, but the Reuters study paints a troubling picture of the state of journalism and our relationship with it. Where we get our information directly impacts what we believe. And what we believe determines what we do.

These are high stakes in an all-in game of survival.

Saying So Long to SEMPO

Yesterday afternoon, while I was in line at the grocery store, my phone pinged. I was mentioned in a Twitter post. For me, that’s becoming a pretty uncommon experience. So I checked the post.  And that’s how I found out that SEMPO is no more.

The tweet was from Dana Todd, who was responding to a Search Engine Journal article by Roger Montti about the demise of SEMPO. For those of you who don’t know SEMPO: it was the Search Engine Marketing Professionals Organization.

It was a big part of my life during what seems like a lifetime ago. Todd was even more involved. Hence the tweet.

Increasingly I find my remaining half-life in digital consists of an infrequent series of “remember-when” throwbacks. This will be one of those.

Todd’s issue with the article was that much of the 17-year history of the organization was glossed over, as Montti chose to focus mainly on the controversies of the first year or two of its existence.

As Todd said, “You only dredged up the early stages of the organization, in its infancy as we struggled to gain respect and traction, and were beset by naysayers who looked for a reason we should fail. We didn’t fail.”

She then added, “There is far more to the SEMPO story, and far more notable people who put in blood sweat and tears to build not just the organization, but the entire industry.”

I was one of those people. But before that, I was also one of the early naysayers.

SEMPO started in 2003. I didn’t join until 2004. I spent at least part of that first year joining the chorus bitching about the organization. And then I realized that I could either bitch from the outside — or I could effect change from the inside.  

After joining, I quickly found myself on that same SEMPO board that I’d been complaining about. In 2005, I became co-chair of the research committee. In 2006, I became the chair of SEMPO. I served in that role for two years and eventually stepped down from SEMPO at the same time I stepped away from the search industry.

Like Todd (who was the president of SEMPO for part of the time I was the chairman), I am proud of what we did, and extraordinarily proud of the team that made it happen. Many of the people I admired most in the industry served with me on that board.

Todd will always be one of my favorite search people. But I also had the privilege of serving with Jeff Pruit, Kevin Lee, Bill Hunt, Dave Fall, Christine Churchill and the person who got the SEMPO ball rolling, along with Todd: Barbara Coll. There were many, many others.

Now, SEMPO is being absorbed by the Digital Analytics Association, which, according to its announcement,  “is committed to helping former SEMPO members become fully integrated into DAA, and will be forming a special interest group (SIG) for search analytics.”

I’ve got to admit: That hurts. Being swallowed up, becoming nothing more than a special interest group, is a rather ignoble end for the association I gave so much to.

But as anyone who has raised a child can tell you, you know you’ve been successful when they no longer need you. And that’s how I choose to interpret this event. The search industry no longer needs SEMPO, at least as a stand-alone organization.

And if that’s the case, then SEMPO knocked it out of the park. Because that sure as hell wasn’t true back in 2003.

Search in 2003 was the Wild West. According to legend, there were white-hat SEOs and black-hat SEOs.

But truth be told, most of us wore hats that were some shade of grey.

The gunslingers of natural search (or organic SEO) were slowly and very reluctantly giving up their turf to the encroaching new merchants of paid search. Google AdWords had only been around for three years, but its launch introduced a whole new dynamic to the ecosystem. Google suddenly had to start a relationship with search marketers.

Before that, the only attempt Google made to reach out came via a rogue mystery poster on SEO industry forums named “googleguy” (later suspected to be the search quality team lead Matt Cutts). To call search an industry would be stretching the term to its breaking point.

The introduction of paid search was creating a two-sided marketplace, and that was forcing search to become more civilized.

The process of civilization is always difficult. It requires the establishment of trust and respect, two commodities that were in desperately short supply in search circa 2003.

SEMPO was the one organization that did the most to bring civilization to the search marketplace. It gave Google a more efficient global conduit to thousands of search marketers. And it gave those search marketers a voice that Google would actually pay some attention to.

But it was more than just starting a conversation. SEMPO challenged search marketers to think beyond their own interests. The organization laid the foundation for a more sustainable and equitable search ecosystem. If SEMPO accomplished anything to be proud of, it was in preventing the Tragedy of the Commons from killing search before it had a chance to establish itself as the fastest growing advertising marketplace in history.

Dana Todd wrapped up her extended Twitter post by writing, “I can say confidently Google wouldn’t be worth $1T without us. SEMPO — you mattered.”

Dana, just like in the old SEMPO days when we double-teamed a message, you said it better than I ever could.

And Google? You’re welcome.

Just in Time for Christmas: More Search Eye-Tracking

The good folks over at the Nielsen Norman Group have released a new search eye tracking report. The findings are quite similar to those of a study my former company — Mediative — did a number of years ago (this link goes to a write-up about the study. Unfortunately, the link to the original study is broken. *Insert head smack here).

In the Nielsen Norman study, the two authors — Kate Moran and Cami Goray — looked at how a more visually rich and complex search results page would impact user interaction with the page. The authors of the report called the sum of participant interactions a “Pinball Pattern”: “Today, we find that people’s attention is distributed on the page and that they process results more nonlinearly than before. We observed so much bouncing between various elements across the page that we can safely define a new SERP-processing gaze pattern — the pinball pattern.”

While I covered this at some length when the original Mediative report came out in 2014 (in three separate columns: 1, 2 & 3), there are some themes that bear repeating. Unfortunately, I found the study’s authors missed what I think are some of the more interesting implications.

In the days of the “10 Blue Links” search results page, we used the same scanning strategy no matter what our intent was. In an environment where the format never changes, you can afford to rely on a stable and consistent strategy. 

In our first eye tracking study, published in 2004, this consistent strategy led to something we called the Golden Triangle. But those days are over.

Today, when every search result can look a little bit different, it comes as no surprise that every search “gaze plot” (the path the eyes take through the results page) will also be different. Let’s take a closer look at the reasons for this. 

SERP Eye Candy

In the Nielsen Norman study, the authors felt “visual weighting” was the main factor in creating the “Pinball Pattern”: “The visual weight of elements on the page drives people’s scanning patterns. Because these elements are distributed all over the page and because some SERPs have more such elements than others, people’s gaze patterns are not linear. The presence and position of visually compelling elements often affect the visibility of the organic results near them.”

While the visual impact of the page elements is certainly a factor, I think it’s only part of the answer. I believe a bigger, and more interesting, factor is how the searcher’s brain and its searching strategies have evolved in lockstep with a more visually complex results page. 

The Importance of Understanding Intent

The reason why we see so much variation in scan patterns is that there is also extensive variation in searchers’ intent. The exact same search query could be used by someone intent on finding an online or physical place to purchase a product, comparing prices on that product, looking to learn more about the technical specs of that product, looking for how-to videos on the use of the product, or looking for consumer reviews on that product.

It’s the same search, but with many different intents. And each of those intents will result in a different scanning pattern. 

Predetermined Page Visualizations

I really don’t believe we start each search page interaction with a blank slate, passively letting our eyes be dragged to the brightest, shiniest object on the page. I think that when we launch the search, our intent has already created an imagined template for the page we expect to see. 

We have all used search enough to be fairly accurate at predicting what the page elements might be: thumbnails of videos or images, a map showing relevant local results, perhaps a Knowledge Graph result in the lefthand column. 

Yes, the visual weighting of elements acts as an anchor to draw the eye, but I believe the eye is using this anticipated template to efficiently parse the results page.

I have previously referred to this behavior as a “chunking” of the results page. And we already have an idea of what the most promising chunks will be when we launch the search. 

It’s this chunking strategy that’s driving the “pinball” behavior in the Nielsen Norman study.  In the Mediative study, it was somewhat surprising to see that users were clicking on a result in about half the time it took in our original 2005 study. We cover more search territory, but thanks to chunking, we do it much more efficiently.

One Last Time: Learn Information Scent

Finally, let me drag out a soapbox I haven’t used for a while. If you really want to understand search interactions, take the time to learn about Information Scent and how our brains follow it (Information Foraging Theory — Pirolli and Card, 1999 — the link to the original study is also broken. *Insert second head smack, this one harder).

This is one area where the Nielsen Norman Group and I are totally aligned. In 2003, Jakob Nielsen — the first N in NNG — called the theory “the most important concept to emerge from human-computer interaction research since 1993.”

On that we can agree.