There Are No Short Cuts to Being Human

The Velvet Sundown fooled a lot of people, including millions of fans on Spotify and the writers and editors at Rolling Stone. It was a band that suddenly showed up on Spotify several months ago, with full albums of vintage Americana-styled rock. Millions started streaming the band’s songs – except there was no band. The songs, the album art, the band’s photos – it was all generated by AI.

When you know this and re-listen to the songs, you swear you would never have been fooled. Those who are now in the know say the music is formulaic, derivative and uninspired. Yet we were fooled – or at least millions of us were – taken in by an AI hoax, or what is now euphemistically labelled on Spotify as “a synthetic music project guided by human creative direction and composed, voiced and visualized with the support of artificial intelligence.”

Formulaic. Derivative. Synthetic. We mean these as criticisms. But they are accurate descriptions of exactly how AI works. It is synthesis by formulas (or algorithms) that parse billions or trillions of data points, identify patterns and derive the finished product from them. That is AI’s greatest strength…and its biggest downfall.

The human brain, on the other hand, works quite differently. Our biggest constraint is the limit of our working memory. When we analyze disparate data points, the available slots in our temporary memory bank can number in the low single digits. To cognitively function beyond this limit, we have to do two things: “chunk” them together into mental building blocks and code them with emotional tags. That is the human brain’s greatest strength… and, again, its biggest downfall. What the human brain is best at is what AI is unable to do. And vice versa.
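The chunking idea can be made concrete with a toy sketch. Nothing below comes from any cognitive science library; the four-slot capacity and the phone-number example are assumptions chosen purely for illustration.

```python
# Toy illustration of "chunking": ten digits overflow a working memory
# of roughly four slots when held individually, but the same digits fit
# comfortably once grouped into three familiar chunks (the segments of
# a phone number). The 4-slot capacity is an assumed round number.
WORKING_MEMORY_SLOTS = 4

digits = list("5551234567")        # 10 separate items
chunks = ["555", "123", "4567"]    # the same digits as 3 chunks

def fits_in_working_memory(items, slots=WORKING_MEMORY_SLOTS):
    # Each element occupies one slot, whether it is a digit or a chunk.
    return len(items) <= slots

print(fits_in_working_memory(digits))  # False: 10 items > 4 slots
print(fits_in_working_memory(chunks))  # True: 3 chunks fit in 4 slots
```

The point of the sketch: chunking doesn’t shrink the information, it repackages it so that each slot holds a richer unit.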

A few posts back when talking about one less-than-impressive experience with an AI tool, I ended by musing what role humans might play as AI evolves and becomes more capable. One possible answer is something labelled “HITL” or “Humans in the Loop.” It plugs the “humanness” that sits in our brains into the equation, allowing AI to do what it’s best at and humans to provide the spark of intuition or the “gut checks” that currently cannot come from an algorithm.

As an example, let me return to the subject of that previous post, building a website. There is a lot that AI could do to build out a website. What it can’t do very well is anticipate how a human might interact with the website. These “use cases” should come from a human, perhaps one like me.

Let me tell you why I believe I’m qualified for the job. For many years, I studied online user behavior quite obsessively and published several white papers that are still cited in the academic world. I was a researcher for hire, with contracts with all the major online players. I say this not to pump my own ego (okay, maybe a little bit – I am human after all) but to set up the process of how I acquired this particular brand of expertise.

It was accumulated over time, as I learned to analyze online interactions, code eye-tracking sessions, and talk to users about their goals and intentions. All the while, I was continually plugging new data into my few available working memory slots and “chunking” it into the building blocks of my expertise, to the point where I could quickly look at a website or search results page and provide a pretty accurate “gut call” prediction of how a user would interact with it. This is – without exception – how humans become experts at anything. Malcolm Gladwell called it the “10,000-hour rule.” For humans to add any value “in the loop” they must put in the time. There are no short cuts.

Or – at least – there never used to be. There is now, and that brings up a problem.

Humans now do something called “cognitive off-loading.” If something looks like it’s going to be a drudge to do, we now get ChatGPT to do it. This is the slogging mental work that our brains are not particularly well suited to. That’s probably why we hate doing it – the brain is trying to shirk the work by tagging it with a negative emotion (brains are sneaky that way). Why not get AI, which can instantly sort through billions of data points and synthesize them into a one-page summary, to do our dirty work for us?

But by off-loading, we short-circuit the very process required to build that uniquely human expertise. Writer, researcher and educational change advocate Eva Keiffenheim outlines the potential danger for humans who “off-load” to a digital brain: we may lose the sole advantage we can offer in an artificially intelligent world. “If you can’t recall it without a device, you haven’t truly learned it. You’ve rented the information. We get stuck at ‘knowing about’ a topic, never reaching the automaticity of ‘knowing how.’”

For generations, we’ve treasured the concept of “know how.” Perhaps, in all that time, we forgot how much hard mental work was required to gain it. That could be why we are quick to trade it away now that we can.

Curation is Our Future. But Can You Trust It?

You can get information from anywhere. But the meaning of that information can come from only one place: you. Everything we take in from the vast ecosystem of information that surrounds us goes through the same singular lens – one crafted by a lifetime of collected beliefs and experiences.

Finding meaning has always been an essentially human activity. Meaning motivates us – it is our operating system. And the ability to create shared meaning can create or crumble societies. We are seeing the consequences of shared meaning play out right now in real time.

The importance of influencing meaning creates an interesting confluence between technology and human behavior. For much of the past two decades, technology has focused on filtering and organizing information. But we are now in an era where technology will start curating our information for us. And that is a very different animal.

What does it mean to “curate” an answer, rather than simply present it to you? Curation is more than just collecting and organizing things. To curate information is to put it in a context that provides additional value by suggesting a possible meaning. This crosses the line between simply disseminating information and attempting to influence people by giving that information a meaningful context.

Not surprisingly, the roots of curation lie – in part – with religion. The word comes from the Latin “curare” – “to take care of.” In medieval times, curates were priests who cared for souls. And they cared for souls by providing a meaning that lay beyond the realm of our corporeal lives. If you really think about religion, it is one massive imposition of a pre-packaged meaning on the world as we perceive it.

In the future, as we access our world through technology platforms, we will rely on technology to mediate meaning. For example, searches on Google now include an “AI Overview” at the top of the search results. The Google page explaining what the Overview is says it shows up when “you want to quickly understand information from a range of sources, including information from across the web and Google’s Knowledge Graph.” That is Google – or rather Google’s AI – curating an answer for you.

It could be argued that this is just another step to make search more useful – something I’ve spent a decade and a half asking for. In 2010, I said that “search providers have to replace relevancy with usefulness. Relevancy is a great measure if we’re judging information, but not so great if we’re measuring usefulness.” If AI could begin to provide actionable answers with a high degree of reliability, it would be a major step forward. There are many who say such curated answers could make search obsolete. But we have to ask ourselves: is this curation something we can trust?

With Google, this will probably start as unintentional curation – giving information meaning through a process of elimination. Given how people scan search listings (something I know a fair bit about), it’s reasonable to assume that many searchers will scan no further than the AI Overview, which is at the top of the results page. In that case, you will be spoon-fed whatever meaning happens to be the product of the AI compilation without bothering to qualify it by scanning any further down the results page. This conveyed meaning may well be unintentional, a distillation of the context from whatever sources provided the information. But given that we are lazy information foragers and will only expend enough effort to get an answer that seems reasonable, we will become trained to accept anything that is presented to us “top of page” at face value.

From there it’s not that big a step to intentional curation – presenting information to support a predetermined meaning. Given that pretty much every tech company folded like a cheap suit the minute Trump assumed office, slashing DEI initiatives and aligning their ethics – or lack thereof – with those of the White House, is it far-fetched to assume that they could start wrapping the information they provide in a “Trump Approved” context, providing us with messaged meaning that supports specific political beliefs? One would hate to think so, but based on Facebook’s recent firing of its fact checkers, I’m not sure it’s wise to trust Big Tech to be the arbiters of meaning.

They don’t have a great track record.

Can OpenAI Make Searching More Useful?

As you may have heard, OpenAI is testing a prototype of a new search engine called SearchGPT. A press release from July 25 notes: “Getting answers on the web can take a lot of effort, often requiring multiple attempts to get relevant results. We believe that by enhancing the conversational capabilities of our models with real-time information from the web, finding what you’re looking for can be faster and easier.”

I’ve been waiting for this for a long time: search that moves beyond relevance to usefulness. It was 14 years ago that I said this in an interview with Aaron Goldman regarding his book “Everything I Know About Marketing I Learned from Google”: “Search providers have to replace relevancy with usefulness. Relevancy is a great measure if we’re judging information, but not so great if we’re measuring usefulness. That’s why I believe apps are the next flavor of search, little dedicated helpers that allow us to do something with the information. The information itself will become less and less important and the app that allows utilization of the information will become more and more important.”

I’ve felt for almost two decades that the days of search as a destination were numbered. For over 30 years now (Archie, the first internet search engine, was created in 1990), when we’re looking for something online, we search, and then we have to do something with what we find on the results page. Sometimes, a single search is enough — but often, it isn’t. For many of our intended end goals, we still have to do a lot of wading through the Internet’s deep end, filtering out the garbage, picking up the nuggets we need and then assembling those into something useful.

I’ve spent much of those past two decades pondering what the future of search might be. In fact, my previous company wrote a paper on it back in 2007. We were looking forward to what we thought might be the future of search, but we didn’t look too far forward. We set 2010 as our crystal ball horizon. Then we assembled an all-star panel of search design and usability experts, including Marissa Mayer, who was then Google’s vice president of search user experience and interface design, and Jakob Nielsen, principal of the Nielsen Norman Group and the web’s best-known usability expert. We asked them what they thought search would look like in three years’ time.

Even back then, almost 20 years ago, I felt the linear presentation of a results page — the 10 blue links concept that started search — was limiting. Since then, we have moved beyond the 10 blue links. A Google search today for the latest iPhone model (one of our test queries in the white paper) actually looks eerily similar to the mock-up we did for what a Google search might look like in the year 2010. It just took Google 14 extra years to get there.

But the original premise of search is still there: do a query, and Google will try to return the most relevant results. If you’re looking to buy an iPhone, today’s results page is probably more useful, mainly due to sponsored content. But it’s still well short of the usefulness I was hoping for.

It’s also interesting to see what directions search has (and hasn’t) taken since then. Mayer talked a lot about interacting with search results. She envisioned an interface where you could annotate and filter your results: “I think that people will be annotating search results pages and web pages a lot. They’re going to be rating them, they’re going to be reviewing them. They’re going to be marking them up, saying ‘I want to come back to this one later.’”

That never really happened. The idea of search as a sticky and interactive interface for the web sort of materialized, but never to the extent that Mayer envisioned.

From our panel, it was Nielsen’s crystal ball that seemed to offer the clearest view of the future: “I think if you look very far ahead, you know 10, 20, 30 years or whatever, then I think there can be a lot of things happening in terms of natural language understanding and making the computer more clever than it is now. If we get to that level then it may be possible to have the computer better guess at what each person needs without the person having to say anything, but I think right now, it is very difficult.”

Nielsen was spot-on in 2007. It’s exactly those advances in natural language processing and artificial intelligence that could allow ChatGPT to now move beyond the paradigm of the search results page and move searching the web into something more useful.

A decade and a half ago, I envisioned an ecosystem of apps that could bridge the gap between what we intended to do and the information and functionality that could be found online.  That’s exactly what’s happening at OpenAI — a number of functional engines powered by AI, all beneath a natural language “chat” interface.

At this point, we still have to “say” what we want in the form of a prompt, but the more we use ChatGPT (or any AI interface) the better it will get to know us. In 2007, when we wrote our white paper on the future of search, personalization was what we were all talking about. Now, with ChatGPT, personalization could come back to the fore, helping AI know what we want even if we can’t put it into words.

As I mentioned in a previous post, we’ll have to wait to see if SearchGPT can make search more useful, especially for complex tasks like planning a vacation, making a major purchase or planning a big event.

But I think all the pieces are there. The monetization silos that dominate the online landscape will still prove a challenge to getting all the way to our final destination, but SearchGPT could make the journey faster and a little less taxing.

Note: I still have a copy of our 2007 white paper if anyone is interested. Just email me (address on the Contact Us page) and I’ll send you a copy.

Our Disappearing Attention Spans

Last week, Mediapost Editor in Chief Joe Mandese mused about our declining attention spans. He wrote, 

“while in the past, the most common addictive analogy might have been opiates — as in an insatiable desire to want more — these days [consumers] seem more like speed freaks looking for the next fix.”

Mandese cited a couple of recent studies, saying that more than half of mobile users tend to abandon any website that takes longer than three seconds to load. That

“has huge implications for the entire media ecosystem — even TV and video — because consumers increasingly are accessing all forms of content and commerce via their mobile devices.”

The question that begs to be asked here is, “Is a short attention span a bad thing?” The famous comparison is that we are now more easily distracted than a goldfish. But does a shorter attention span negatively impact us, or is it just our brain changing to be a better fit with our environment?

Academics have been debating the impact of technology on our ability to cognitively process things for some time. Journalist Nicholas Carr sounded the warning in his 2010 book, “The Shallows,” where he wrote, 

“(Our brains are) very malleable, they adapt at the cellular level to whatever we happen to be doing. And so the more time we spend surfing, and skimming, and scanning … the more adept we become at that mode of thinking.”

Certainly, Carr is right about the plasticity of our brains. It’s one of their most advantageous features. But is our digital environment forever pushing our brains to the shallow end of the pool? Well, it depends. Context is important. One of the biggest factors in determining how we process the information we’re seeing is the device we’re seeing it on.

Back in 2010, Microsoft did a large-scale ethnographic study on how people searched for information on different devices. The researchers found those behaviors differed greatly depending on the platform being used and the intent of the searcher. They found three main categories of search behaviors:

  • Missions are looking for one specific answer (for example, an address or phone number) and often happen on a mobile device.
  • Excavations are widespread searches that need to combine different types of information (for example, researching an upcoming trip or major purchase). They are usually launched on a desktop.
  • Finally, there are Explorations: searching for novelty, often to pass the time. These can happen on all types of devices and can often progress through different devices as the exploration evolves. The initial search may be launched on a mobile device, but as the user gets deeper into the exploration, she may switch to a desktop.

The important thing about this research was that it showed our information-seeking behaviors are very tied to intent, which in turn determines the device used. So, at a surface level, we shouldn’t be too quick to extrapolate behaviors seen on mobile devices with certain intents to other platforms or other intents. We’re very good at matching a search strategy to the strengths and weaknesses of the device we’re using.

But at a deeper level, if Carr is right (and I believe he is) about our constant split-second scanning of information to find items of interest making permanent changes in our brains, what are the implications of this?

For such a fundamentally important question, there is a small but rapidly growing body of academic research that has tried to answer it. To add to the murkiness, many of the studies done contradict each other. The best summary I could find of academia’s quest to determine if “the Internet is making us stupid” was a 2015 article in academic journal The Neuroscientist.

The authors sum up by essentially saying both “yes” — and “no.” We are getting better at quickly filtering through reams of information. We are spending fewer cognitive resources on memorizing things we know we can easily find online, which theoretically leaves those resources free for other purposes. Finally, for this post, I will steer away from commenting on multitasking, because the academic jury is still very much out on that one.

But the authors also say that 

“we are shifting towards a shallow mode of learning characterized by quick scanning, reduced contemplation and memory consolidation.”

The fact is, we are spending more and more of our time scanning and clicking. There are inherent benefits to us in learning how to do that faster and more efficiently. The human brain is built to adapt and become better at the things we do all the time. But there is a price to be paid. The brain will also become less capable of doing the things we don’t do as much anymore. As the authors said, this includes actually taking the time to think.

So, in answer to the question “Is the Internet making us stupid?,” I would say no. We are just becoming smart in a different way.

But I would also say the Internet is making us less thoughtful. And that brings up a rather worrying prospect.

As I’ve said many times before, the brain thinks both fast and slow. The fast loop is brutally efficient. It is built to get stuff done in a split second, without having to think about it. Because of this, the fast loop has to be driven by what we already know or think we know. Our “fast” behaviors are necessarily bounded by the beliefs we already hold. It’s this fast loop that’s in control when we’re scanning and clicking our way through our digital environments.

But it’s the slow loop that allows us to extend our thoughts beyond our beliefs. This is where we’ll find our “open minds,” if we have such a thing. Here, we can challenge our beliefs and, if presented with enough evidence to the contrary, willingly break them down and rebuild them to update our understanding of the world. In the sense-making loop, this is called reframing.

The more time we spend “thinking fast” at the expense of “thinking slow,” the more we will become prisoners to our existing beliefs. We will be less able to consolidate and consider information that lies beyond those boundaries. We will spend more time “parsing” and less time “pondering.” As we do so, our brains will shift and change accordingly.

Ironically, our minds will change in such a way to make it exceedingly difficult to change our minds.

Just in Time for Christmas: More Search Eye-Tracking

The good folks over at the Nielsen Norman Group have released a new search eye-tracking report. The findings are quite similar to one my former company — Mediative — did a number of years ago (this link goes to a write-up about the study. Unfortunately, the link to the original study is broken. *Insert head smack here).

In the Nielsen Norman study, the two authors — Kate Moran and Cami Goray — looked at how a more visually rich and complex search results page would impact user interaction with the page. The authors of the report called the sum of participant interactions a “Pinball Pattern”: “Today, we find that people’s attention is distributed on the page and that they process results more nonlinearly than before. We observed so much bouncing between various elements across the page that we can safely define a new SERP-processing gaze pattern — the pinball pattern.”

While I covered this at some length when the original Mediative report came out in 2014 (in three separate columns: 1,2 & 3), there are some themes that bear repeating. Unfortunately, I found the study’s authors missed what I think are some of the more interesting implications. 

In the days of the “10 Blue Links” search results page, we used the same scanning strategy no matter what our intent was. In an environment where the format never changes, you can afford to rely on a stable and consistent strategy. 

In our first eye tracking study, published in 2004, this consistent strategy led to something we called the Golden Triangle. But those days are over.

Today, when every search result can look a little bit different, it comes as no surprise that every search “gaze plot” (the path the eyes take through the results page) will also be different. Let’s take a closer look at the reasons for this. 

SERP Eye Candy

In the Nielsen Norman study, the authors felt “visual weighting” was the main factor in creating the “Pinball Pattern”: “The visual weight of elements on the page drives people’s scanning patterns. Because these elements are distributed all over the page and because some SERPs have more such elements than others, people’s gaze patterns are not linear. The presence and position of visually compelling elements often affect the visibility of the organic results near them.”

While the visual impact of the page elements is certainly a factor, I think it’s only part of the answer. I believe a bigger, and more interesting, factor is how the searcher’s brain and its searching strategies have evolved in lockstep with a more visually complex results page. 

The Importance of Understanding Intent

The reason why we see so much variation in scan patterns is that there is also extensive variation in searchers’ intent. The exact same search query could be used by someone intent on finding an online or physical place to purchase a product, comparing prices on that product, looking to learn more about the technical specs of that product, looking for how-to videos on the use of the product, or looking for consumer reviews on that product.

It’s the same search, but with many different intents. And each of those intents will result in a different scanning pattern. 

Predetermined Page Visualizations

I really don’t believe we start each search page interaction with a blank slate, passively letting our eyes be dragged to the brightest, shiniest object on the page. I think that when we launch the search, our intent has already created an imagined template for the page we expect to see. 

We have all used search enough to be fairly accurate at predicting what the page elements might be: thumbnails of videos or images, a map showing relevant local results, perhaps a Knowledge Graph result in the righthand column.

Yes, the visual weighting of elements acts as an anchor to draw the eye, but I believe the eye uses this anticipated template to efficiently parse the results page.

I have previously referred to this behavior as a “chunking” of the results page. And we already have an idea of what the most promising chunks will be when we launch the search. 

It’s this chunking strategy that’s driving the “pinball” behavior in the Nielsen Norman study. In the Mediative study, it was somewhat surprising to see that users were clicking on a result in about half the time it took in our original 2004 study. We cover more search territory, but thanks to chunking, we do it much more efficiently.

One Last Time: Learn Information Scent

Finally, let me drag out a soapbox I haven’t used for a while. If you really want to understand search interactions, take the time to learn about Information Scent and how our brains follow it (Information Foraging Theory — Pirolli and Card, 1999 — the link to the original study is also broken. *Insert second head smack, this one harder.). 
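The foraging analogy has a formal core that is easy to sketch. Borrowing the patch-model logic that Pirolli and Card adapted from optimal foraging theory, a searcher should leave an information “patch” (say, a results page) once the marginal rate of gain there drops below the average rate available elsewhere in the environment. The gain curve and rates below are invented numbers for illustration, not figures from the study.

```python
# Patch-leaving sketch: leave a results page (the "patch") once the
# marginal gain per step falls below the average rate the wider
# environment offers. All numbers here are invented for illustration.

def marginal_gains(cumulative):
    """Per-step gain increments from a cumulative gain curve."""
    return [b - a for a, b in zip(cumulative, cumulative[1:])]

# Cumulative "useful information" after each second spent in the patch,
# showing the diminishing returns typical of most information patches.
cumulative_gain = [0, 10, 18, 24, 28, 30, 31]

def optimal_leave_step(cumulative, env_rate):
    """First step at which marginal gain drops below the environment rate."""
    for step, gain in enumerate(marginal_gains(cumulative), start=1):
        if gain < env_rate:
            return step
    return len(cumulative) - 1

# With an assumed environment-wide rate of 5 units/second, the forager
# should move on after step 4, where the patch yields only 4 units.
print(optimal_leave_step(cumulative_gain, 5))  # 4
```

This is the intuition behind “information scent”: we constantly weigh what the current patch is still giving us against what a fresh click promises.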

This is one area where the Nielsen Norman Group and I are totally aligned. In 2003, Jakob Nielsen — the first N in NNG — called the theory “the most important concept to emerge from human-computer interaction research since 1993.”

On that we can agree.

Data does NOT Equal People

We marketers love data. We treat it like a holy grail: a thing to be worshipped. But we’re praying at the wrong altar. Or, at the very least, we’re praying at a misleading altar.

Data is the digital residue of behavior. It is the contrails of customer intent — a thin, wispy proxy for the rich bandwidth of the real world. It does have a purpose, but it should be just one tool in a marketer’s toolbox. Unfortunately, we tend to use it as a Swiss army knife, thinking it’s the only tool we need.

The problem is that data is seductive. It’s pliable and reliable, luring us into manipulation because it’s so easy to do. It can be twisted and molded with algorithms and spreadsheets.

But it’s also sterile. There is a reason people don’t fit nicely into spreadsheets. There are simply not enough dimensions and nuances to accommodate real human behavior.

Data is great for answering the questions “what,” “who,” “when” and “where.” But those answers are all glimpses of what has already happened. Stopping here is like navigating through the rear-view mirror.

Data seldom yields the answer to “why.” But it’s why that makes the magic happen, that gives us an empathetic understanding that helps us reliably predict future behaviors.

Uncovering the what, who, when and where makes us good marketers. But it’s “why” that makes us great. It’s knowing why that allows us to connect the distal dots, hacking out the hypotheses that can take us forward in the leaps required by truly great marketing. As Tom Goodwin, the author of “Digital Darwinism,” said in a recent post, “What digital has done well is have enough of a data trail to claim, not create, success.”

We as marketers have to resist stopping at the data. We have to keep pursuing why.

Here’s one example from my own experience. Some years ago, my agency did an eye-tracking study that looked at gender differences in how we navigate websites.

For me, the most interesting finding to fall out of the data was that females spent a lot more time than males looking at a website’s “hero” shot, especially if it was a picture that had faces in it. Males quickly scanned the picture, but then immediately moved their eyes up to the navigation menu and started scanning the options there. Females lingered on the graphic and then moved on to scan text immediately adjacent to it.

Now, I could have stopped at “who” and “what,” which in itself would have been a pretty interesting finding. But I wanted to know “why.” And that’s where things started to get messy.

To start to understand why, you have to rely on feelings and intuition. You also have to accept that you probably won’t arrive at a definitive answer. “Why” lives in the realm of “wicked” problems, which I defined in a previous column as “questions that can’t be answered by yes or no — the answer always seems to be maybe.  There is no linear path to solve them. You just keep going in loops, hopefully getting closer to an answer but never quite arriving at one. Usually, the optimal solution to a wicked problem is ‘good enough – for now.’”

The answer to why males scan a website differently than females is buried in a maze of evolutionary biology, social norms and cognitive heuristics. It probably has something to do with wayfinding strategies and hardwired biases. It won’t just “fall out” of data because it’s not in the data to begin with.

Even half-right “why” answers often take months or even years of diligent pursuit to reveal themselves. Given that, I understand why it’s easier to just focus on the data. It will get you to “good,” and maybe that’s enough.

Unless, of course, you’re aiming to “put a ding in the universe,” as Steve Jobs said in an inspirational commencement speech at Stanford University. Then you have to shoot for great.

Search and The Path to Purchase

Just how short do we want the path to purchase to be anyway?

A few weeks back, Mediapost reporter Laurie Sullivan brought this question up in her article detailing how Instagram is building ecommerce into its app. While Instagram is not usually considered a search platform, Sullivan muses on the connecting of two dots that seem destined to be joined: search and purchase. But is that a destiny that users can “buy into”?

Again, this is one of those questions where the answer is always, “It depends.”  And there are at least a few dependencies in this case.

The first is whether our perspective is as a marketer or a consumer. Marketers always want the path to purchase to be as short as possible. When we have that hat on, we won’t be fully satisfied until the package hits our front step at about the same time we get the first mental inkling to buy.

Amazon has done the most to truncate the path to purchase. Marketers look longingly at its one-click ordering path – requiring mere seconds and a single click to go from search to successful fulfillment. If only all purchases were this streamlined, the marketer in us muses.

But if we’re leading our double life as a consumer, there is a second “It depends…” – and that depends on what our shopping intentions are. There are times when we – as consumers – also want the fastest possible path to purchase. But that’s not true all the time.

Back when I was looking at purchase behaviors in the B2B world, I found that there are variables that lead to different intentions on the part of the buyer. Essentially, it boils down to the degree of risk and reward in the purchase itself. I first wrote about this almost a decade ago now.

If there’s a fairly high degree of risk inherent in the purchase itself, the last thing we want is a frictionless path to purchase. These are what we call high consideration purchases.

We want to take our time, feeling that we’ve considered all the options. One click ordering scares the bejeezus out of us.

Let’s go back to the Amazon example. Today, Amazon is the default search engine of choice for product searches, outpacing Google by a margin rapidly approaching double digits. But this is not really an apples to apples comparison. We have to factor in the deliberate intention of the user. We go to Amazon to buy, so a faster path to purchase is appropriate. We go to Google to consider. And for reasons I’ll get into soon, we would be less accepting of a “buy” button there.

The buying paths we would typically take in a social platform like Instagram are probably not that high risk, so a fast path to purchase might be fine. But there’s another factor we need to consider when shortening the path to purchase – or building a path in the first place – in what has traditionally been considered a discovery platform. Let’s call it a mixing of motives.

Google has been dancing around a shorter path to purchase for years now. As Sullivan said in her article, “Search engines have strength in what’s known as discovery shopping, but completing the transaction has never been a strong point — mainly because brands decline to give up the ownership of the data.”

Data ownership is one thing, but even if the data were available, including a “buy now” button in search results can also lead to user trust issues. For many purchases, we need to feel that our discovery engine has no financial motive in the ordering of their search results. This – of course – is a fallacy we build in our own minds. There is always a financial motive in the ordering of search results. But as long as it’s not overt, we can trick ourselves into living with it. A “buy now” button makes it overt.

This problem of mixed motives is not just a problem of user perception. It also can lead publishers down a path that leaves objectivity behind and pursues higher profits ahead. One example is TripAdvisor. Some years ago, they made the corporate decision to parlay their strong position as a travel experience discovery platform into an instant booking platform. In the beginning, they separated this booking experience onto its own platform under the brand Viator. Today, the booking experience has been folded into the main TripAdvisor results and – more disturbingly – is now the default search order. Every result at the top of the page has a “Book Now” button.

Speaking as a sample of one, I trust TripAdvisor a lot less than I used to.

How We Might Search (On the Go)

As I mentioned in last week’s column, Mediative has just released a new eye-tracking study on mobile devices. And it appears that we’re still conditioned to look for the number one organic result before clicking on our preferred destination.

But…

It appears that things might be in the process of changing. This makes sense. Searching on a mobile device is – and should be – significantly different from searching on a desktop. We have different intents. We are interacting with a different platform. Even the way we search is different.

Searching on a desktop is all about consideration. It’s about filtering and shortlisting multiple options to find the best one. Our search strategies are still carrying a significant amount of baggage from what search was – an often imperfect way to find the best place to get more information about something. That’s why we still look for the top organic listing. For some reason we still subconsciously consider this the gold standard of informational relevancy. We measure all other results against it. That’s why we make sure we reserve one slot from the three to five available in our working memory (I have found that the average person considers about 4 results at a time) for its evaluation.

But searching on a mobile device isn’t about filtering content. For one thing, it’s absolutely the wrong platform to do this with. The real estate is too limited. For another, it’s probably not what we want to spend our time doing. We’re on the go and trying to get stuff done. This is not the time for pausing and reflecting. This is the time to find what I’m looking for and use it to take action.

This all makes sense, but the fact remains that the way we search is a product of habit. It’s a conditioned subconscious strategy that was largely formed on the desktop. Most of us haven’t done enough searching on mobile devices yet to abandon our desktop-derived strategies and create new mobile-specific ones. So our subconscious starts playing out the desktop script and only varies from it when it looks like it’s not going to deliver acceptable results. That’s why we’re still looking for that number one organic listing to benchmark against.

There were a few findings in the Mediative study that indicate our desktop habits may be starting to slip on mobile devices. But before we review them, let’s do a quick review of how habits play out. Habits are the brain’s way of cutting down on thinking. If we do something over and over again and get acceptable results, we store that behavior as a habit. The brain goes on autopilot so we don’t have to think our way through a task with predictable outcomes.

If, however, things change, either in the way the task plays out or in the outcomes we get, the brain reluctantly takes control again and starts thinking its way through the task. I believe this is exactly what’s happening with our mobile searches. The brain desperately wants to use its desktop habits, but the results are falling below our threshold of acceptability. That means we’re all somewhere in the process of rebuilding a search strategy more suitable for a mobile device.

Mediative’s study shows me a brain that’s caught between the desktop searches we’ve always done and the mobile searches we’d like to do. We still feel we should scroll to see at least the top organic result, but as mobile search results become more aligned with our intent – which is typically to take action right away – we are being sidetracked from our habitual behaviors, and our brains kick into gear to take control. The result is that when Google shows search elements that are probably more aligned with our intent – local results, knowledge graphs or even highly relevant ads with logical ad extensions (such as a “call” link) – we lose confidence in our habits. We still scroll down to check out the organic result, but we probably scroll back up and click on the more relevant result.

All this switching back and forth from habitual to engaged interaction with the results ends up exacting a cost in terms of efficiency. We take longer to conduct searches on a mobile device, especially if that search shows other types of results near the top. In the study, participants spent an extra 2 seconds or so scrolling between the presented results (7.15 seconds for varied results vs. 4.95 seconds for organic only results). And even though they spent more time scrolling, more participants ended up clicking on the mobile relevant results they saw right at the top.
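The efficiency cost shows up directly in those two averages. A quick sketch of the arithmetic (only the 7.15- and 4.95-second figures come from the study; the derived numbers are simple subtraction and division):

```python
# Back-of-envelope check of the Mediative timing numbers quoted above.
# The two averages are from the study; the derived figures are arithmetic,
# not additional study data.

mixed_results_s = 7.15   # avg. scan/scroll time when varied result types appear
organic_only_s = 4.95    # avg. scan/scroll time for organic-only result pages

extra_s = mixed_results_s - organic_only_s
relative_increase = extra_s / organic_only_s

print(f"Extra scan time: {extra_s:.2f} s ({relative_increase:.0%} longer)")
```

The “extra 2 seconds or so” in the text works out to roughly 44% more scan time when mixed result types appear.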

The trends I’m describing here are subtle – often playing out in a couple of seconds or less. And you might say that’s no big deal. But habits are always a big deal. The fact that we’re still relying on desktop habits laid down over the past two decades shows how persistent they can be. If I’m right and we’re finally building new habits specific to mobile devices, those habits could dictate our search behaviors for a long time to come.

In Search – Even on Mobile – Organic Still Matters

I told someone recently that I feel like Rick Astley. You know, the guy who had the monster hit “Never Gonna Give You Up” in 1987 and is still trading on it almost 30 years later? He even enjoyed a brief resurgence of viral fame in 2007, when the world discovered what it meant to be “Rickrolled.”

For me, my “Never Gonna Give You Up” is the Golden Triangle eye-tracking study we released in 2005. It’s my one-hit wonder (to be fair to Astley, he did have a couple of other hits, but you get the idea). And yes, I’m still talking about it.

The Golden Triangle as we identified it existed because people were drawn to look at the number one organic listing. That’s an important thing to keep in mind. In today’s world of ad blockers and teeth-gnashing about the future of advertising, there is probably no purer or more controllable environment than the search results page. Creativity is stripped to the bare minimum. Ads have to be highly relevant and non-promotional in nature. Interaction is restricted to the few seconds required to scan and click. If ads were going to be tolerated anywhere, it would be on the search results page.

But…

If we fully trusted ads – especially ones as benign as those that show up on search results – there would have been no Golden Triangle. It only existed because we needed to see that top organic result, and dragging our eyes down to it formed one side of the triangle.

Fast forward almost 10 years. Mediative, the current incarnation of my old company, released a follow-up study two years ago. While the Golden Triangle had definitely morphed into a more linear scan, the motivation remained – people wanted to scan down to see at least one organic listing. They didn’t trust ads then. They don’t trust ads now.

Google has used this need to anchor our scanning with the top organic listing to introduce a greater variety of results into the top “hot zone” – where scanning is the greatest. Now, depending on the search, there is likely to be at least a full screen of various results – including ads, local listings, reviews or news items – before your eyes hit that top organic web result. Yet, we seem to be persistent in our need to see it. Most people still make the effort to scroll down, find it and assess its relevance.

It should be noted that all of the above refers to desktop search. But almost a year ago, Google announced that – for the first time ever – more searches happened on a mobile device than on a desktop.

Mediative just released a new eye-tracking study (note: I was not involved at all with this one). This time, they dove into scan patterns on mobile devices. Given the limited real estate, and the fact that for many popular searches you would have to consciously scroll down at least a couple of times to see the first organic result, did users become more accepting of ads?

Nope. They just scanned further down!

The study’s first finding was that the #1 organic listing still captures the most click activity, but it takes users almost twice as long to find it as it does on a desktop.

The study’s second finding was that even though organic is still important, position matters more than ever. Users will make the effort to find the top organic result and, once they do, they’ll generally scan the top 4 results, but if they find nothing relevant, they probably won’t scan much further. In the study, 92.6% of the clicks happened above the 4th organic listing. On a desktop, 84% of the clicks happened above the number 4 listing.

The third finding reveals an interesting paradox emerging on mobile devices: we’re carrying our desktop search habits over with us – especially our need to see at least one organic listing. The average time to scan the top sponsored listing was only 0.36 seconds, meaning that people checked it out immediately after orienting themselves to the mobile results page. But for those who clicked that listing, the average time to click was 5.95 seconds – almost 50% longer than the average time to click on a desktop search. When organic results are pushed down the page by other content, it takes us longer to feel confident enough to make our choice. We still need to anchor our relevancy assessment with that top organic result, and that’s making us less efficient in our mobile searches than we are on the desktop.
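That “almost 50% longer” comparison implies a desktop time-to-click of roughly four seconds. A quick back-of-envelope sketch (only the 5.95-second figure is from the study; the desktop value is inferred here, not reported):

```python
# Infer the implied desktop time-to-click from the mobile figure and the
# "almost 50% longer" comparison. Only 5.95 s is a reported study number;
# the desktop value below is derived, not measured.

mobile_click_s = 5.95     # avg. time to click the top sponsored listing (mobile)
relative_increase = 0.50  # "almost 50% longer" than desktop

implied_desktop_s = mobile_click_s / (1 + relative_increase)
print(f"Implied desktop time to click: ~{implied_desktop_s:.1f} s")
```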

The study also indicated that these behaviors could be in flux. We may be adapting our search strategies for mobile devices, but we’re just not quite there yet. I’ll touch on this in next week’s column.

Evolved Search Behaviors: Takeaways for Marketers

In the last two columns, I first looked at the origins of the original Golden Triangle, and then at how search behaviors have evolved in the last 9 years, according to a new eye-tracking study from Mediative. In today’s column, I’ll try to pick out a few “so whats” for search marketers.

It’s Not About Location, It’s About Intent

In 2005, search marketing was all about location. It was about grabbing a part of the Golden Triangle – and the higher, the better. The delta in scanning and clicks between the first organic result and the second was dramatic – a factor of 2 to 1! Similar differences were seen in the top paid results. It’s as if, given the number of options available on the page (usually between 12 and 18, depending on the number of ads showing), searchers used position as a quick and dirty way to filter results, reasoning that the higher the result, the better the match to their intent.

In 2014, however, it’s a very different story. Because the first scan is now to find the most appropriate chunk, the importance of being high on the page is significantly lessened. And once the second stage of scanning begins within a results chunk, there seems to be more vertical scanning and less lateral scanning. Mediative found that in some instances it was the third or fourth listing in a chunk that attracted the most attention, depending on content, format and user intent. For example, in the heat map shown below, the third organic result actually got as many clicks as the first, capturing 26% of all the clicks on the page and 15% of the time spent on the page. The reason could be that it was the only listing with the Google Ratings Rich Snippet, thanks to the proper use of structured data markup. In this case, the information scent that promised user reviews was a strong match with user intent – but you would only know this if you knew what that intent was.

[Heat map: Google-Ford-Fiesta]
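The rich-snippet advantage in that heat map came from structured data markup. As a rough, hedged illustration – the product name and numbers below are invented, and pages of that era more often expressed this as schema.org microdata in HTML than as the JSON-LD serialization shown here – the underlying review data looks something like this:

```python
import json

# Hypothetical schema.org AggregateRating payload, serialized as JSON-LD.
# All values are invented sample data for illustration only.
rating_markup = {
    "@context": "https://schema.org",
    "@type": "Product",
    "name": "Example Home Decor Lamp",  # hypothetical product
    "aggregateRating": {
        "@type": "AggregateRating",
        "ratingValue": "4.4",
        "reviewCount": "89",
    },
}

# This string would be embedded in a <script type="application/ld+json"> tag
# so the search engine can read the rating data and show the stars.
print(json.dumps(rating_markup, indent=2))
```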

This change in scanning strategies makes it more important than ever to understand the most common intents that would make users turn to a search engine. What are the decision steps they go through, and at which of those steps might they turn to a search engine? Would it be to discover a solution to an identified need, to find out more about a known solution, to help build a consideration set for direct comparisons, to look for one specific piece of information (i.e., a price) or to navigate to one particular destination, perhaps to order online? If you know why your prospects might use search, you’ll have a much better idea of what you need to do with your content to ensure you’re in the right place at the right time with the right content. Nothing shows this more clearly than the following comparison of heat maps. The one on the left was produced when searchers were given a scenario that required them to gather information. The one on the right resulted from a scenario where searchers had to find a site to navigate to. You can see the dramatic difference in scanning behaviors.

[Heat map comparison: Intent-compared-2]

If search used to be about location, location, location, it’s now about intent, intent, intent.

Organic Optimization Matters More than Ever!

Search marketers have been saying that organic optimization is dying for at least two decades now – ever since I got into this industry. Guess what? Not only is organic optimization not dead, it’s more important than ever! In Enquiro’s original 2005 study, the top two sponsored ads captured 14.1% of all clicks. In Mediative’s 2014 follow-up, that number really didn’t change much, edging up to 14.5%. What did change was the relevance of the rest of the listings on the page. In 2005, all the organic results combined captured 56.7% of the clicks. That left about 29% of users either going to the second page of results, launching a new search or clicking on one of the side sponsored ads (these accounted for only a small fraction of the clicks). In 2014, the organic results, including all the different category “chunks,” captured 74.6% of the clicks. This leaves only 11% either clicking on the side ads (again, a tiny percentage), going to the second page or launching a new search. That means Google has upped its first-page success rate to an impressive 90%.

First of all, that means you really need to break onto the first page of results to gain any visibility at all. If you can’t do it organically, make sure you pay for presence. But secondly, it means that of all the clicks on the page, some type of organic result is capturing 84% of them. The trick is to know which type of organic result will capture the click – and to do that you need to know the user’s intent (see above). But you also need to optimize across your entire content portfolio. With my own blog, two of the biggest traffic referrers happen to be image searches.
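For what it’s worth, the shares quoted above tally up cleanly. A quick sketch (the percentages come from the two studies; the sums, remainders and rounding are mine):

```python
# Tally first-page click shares from the 2005 and 2014 studies quoted above.
# Input percentages are from the text; sums and remainders are derived.

sponsored_2005, organic_2005 = 14.1, 56.7
sponsored_2014, organic_2014 = 14.5, 74.6

on_page_2005 = sponsored_2005 + organic_2005  # 70.8% stayed on page one
leakage_2005 = 100 - on_page_2005             # ~29% went to page 2 / new search

on_page_2014 = sponsored_2014 + organic_2014  # 89.1% - the "about 90%" figure
leakage_2014 = 100 - on_page_2014             # ~11%

print(f"2005 first-page success: {on_page_2005:.1f}% (leakage {leakage_2005:.1f}%)")
print(f"2014 first-page success: {on_page_2014:.1f}% (leakage {leakage_2014:.1f}%)")
```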

Left Gets to Lead

The left side of the results page has always been important, but the evolution of scanning behaviors now makes it vital. The heat map below shows just how important it is to seed the left-hand side of results with information scent.

[Heat map: Googlelefthand]

Last week, I talked about how the categorization of results has caused us to adopt a two-stage scanning strategy: the first stage to determine which “chunks” of result categories are the best match to intent, and the second to evaluate the listings in the most relevant chunks. The vertical scan down the left-hand side of the page is where we decide which “chunks” of results are the most promising. And in the second scan, because of the improved relevancy, we often make the decision to click without a lot of horizontal scanning to qualify our choice. Remember, we’re only spending a little over a second scanning a result before we click. This is just enough to pick up the barest whiffs of information scent, and almost all of that scent comes from the left side of the listing. Look at the three choices above that captured the majority of scanning and clicks. The search was for “home decor store toronto.” The first popular result was a local result for the well-known brand Crate and Barrel. This reinforces how important brands can be if they show up on the left side of the result set. The second popular result was a website listing for another well-known brand – The Pottery Barn. The third was a link to Yelp – a directory site that offered a choice of options. In all cases, the scent found at the far left of the result was enough to capture a click. There was almost no lateral scanning to the right. When crafting titles, snippets and metadata, make sure you stack information scent to the left.
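One way to act on “stack information scent to the left” is to audit where a title’s key term actually sits. A toy sketch (the 60-character visible window is an assumed approximation of where search listings truncated titles, not a documented limit, and the example titles are invented):

```python
# Toy audit: does a title's key "scent" term survive the visible portion
# of a search listing? The 60-char cutoff is an assumed approximation,
# not an official limit.

VISIBLE_CHARS = 60

def scent_position(title: str, keyword: str) -> int:
    """Return the character offset of the keyword in the title, or -1."""
    return title.lower().find(keyword.lower())

def left_stacked(title: str, keyword: str) -> bool:
    """True if the keyword appears inside the assumed visible window."""
    pos = scent_position(title, keyword)
    return 0 <= pos < VISIBLE_CHARS

# A brand-last title risks pushing the scent past the cutoff...
weak = "Shop Our Huge Selection of Quality Furniture, Lighting and More | Pottery Barn"
# ...while a brand-first title keeps it at the far left.
strong = "Pottery Barn | Home Decor, Furniture and Lighting"

print(left_stacked(weak, "Pottery Barn"))    # brand lands past the 60-char window
print(left_stacked(strong, "Pottery Barn"))  # brand sits at the far left
```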

In the end, there are no magic bullets from this latest glimpse into search behaviors. It still comes down to the five foundational planks that have always underpinned good search marketing:

  1. Understand your user’s intent
  2. Provide a rich portfolio of content and functionality aligned with those intents
  3. Ensure your content appears at or near the top of search results, either through organic optimization or well run search campaigns
  4. Provide relevant information scent to capture clicks
  5. Make sure you deliver on what you promise post-click

Sure, the game is a little more complex than it was 9 years ago, but the rules haven’t changed.