A Possibly Premature Post-Mortem on Yahoo

Last Thursday, Yahoo held its annual shareholder meeting. At that meeting, CEO Marissa Mayer doubled down on the company’s kiss of death. She stated that the board’s goals are fully aligned with one clear priority: “delivering shareholder value to all of you.” She further mentioned, when discussing the divestiture of all that once was Yahoo, that she’s “been very heartened by the level of interest in Yahoo. It validates our business processes as well as our achievements to date.”

It’s fancier language, but it’s basically the same as the butcher saying, “This cow is no longer viable as a cow, so I’m looking at it as a collection of rump roasts, T-bones and hamburger. I’m hoping we have more of the former and less of the latter.”

I first encountered Yahoo in 1995, shortly after its brief life as Jerry and David’s Guide to the World Wide Web. I think it was probably still parked on Stanford’s servers at the time. Back then, the Internet was like the world’s biggest second-hand store – a huge collection that was 95% junk and 5% useful stuff, with no overarching order or organization. David Filo and Jerry Yang’s site was one of the very first to try to provide that order.

As an early search marketer in the run-up to the dot-com bubble, I couldn’t ignore the Yahoo directory. The Yahooligans walked with typical Valley swagger. Hubris was never in short supply. They were the cocks of the walk and they knew it.

It was a much-humbled, post-bubble Yahoo that I visited in 2004. They had gotten their search asses soundly kicked by Google, which was by then powering their non-directory results. The age of the curated directory was gone, replaced by the scalability of algorithmic search.

As a culture, the Yahooligans were struggling with the mixed management signals that came from then-CEO Terry Semel and his team. Sunnyvale was clouded in a purple haze. The Yahooligans didn’t know who the hell they were or what they were supposed to do. Were they a tech company or an entertainment company? The answer, as it turned out, was neither.

I met with the remnants of the once-mighty search team to talk about user behaviors. I didn’t know it at the time, but Yahoo was gearing up to relaunch their search service. A much-vilified paid inclusion program would also debut. It was one of many ill-fated attempts to find the next “Big Thing.”

Marissa Mayer continues to put a brave face on it, but the Yahoo engine ran out of steam at least a decade and a half ago. What amazes me is how long the ride has been. There is a message here for tech-based companies.

If you dig down to the critical incubation period of any tech company, you find a recurring pattern. Some technologically mediated connection allows people to do something they were previously unable to do. This releases pent-up market demand. It’s like a thin sliver trying to poke through a water balloon. If successful, this released market demand creates an immediate and sizable audience for whoever introduced the innovation. Yahoo’s directory, Google’s PageRank, Facebook’s “Facemash”, Airbnb’s accommodation directory, Uber’s ridesharing app – they all share the same modus operandi: a tech step forward creates a new audience and market opportunity.

In hindsight, once you strip away all the hype, it’s amazing how tenuous and unimpressive these technological advances are. Luck and timing typically play a huge part. If the conditions are right, the sliver eases through the balloon’s membrane and for a time, there is a steady stream of opportunity.

The problem is that as easily as these markets form, they can just as easily evaporate. When the technological advantage passes to the next competitor, as it did when Yahoo gave way to Google, all that’s left is the audience. When you consider that Yahoo has been coasting on this audience for close to two decades, it’s rather amazing that Mayer still has any assets at all to sell.


How We Might Search (On the Go)

As I mentioned in last week’s column, Mediative has just released a new eye-tracking study on mobile devices. And it appears that we’re still conditioned to look for the number one organic result before clicking on our preferred destination.

But…

It appears that things might be in the process of changing. This makes sense. Searching on a mobile device is – and should be – significantly different from searching on a desktop. We have different intents. We are interacting with a different platform. Even the way we search is different.

Searching on a desktop is all about consideration. It’s about filtering and shortlisting multiple options to find the best one. Our search strategies are still carrying a significant amount of baggage from what search was – an often imperfect way to find the best place to get more information about something. That’s why we still look for the top organic listing. For some reason we still subconsciously consider this the gold standard of informational relevancy. We measure all other results against it. That’s why we make sure we reserve one slot from the three to five available in our working memory (I have found that the average person considers about 4 results at a time) for its evaluation.

But searching on a mobile device isn’t about filtering content. For one thing, it’s absolutely the wrong platform to do this with. The real estate is too limited. For another, it’s probably not what we want to spend our time doing. We’re on the go and trying to get stuff done. This is not the time for pausing and reflecting. This is the time to find what I’m looking for and use it to take action.

This all makes sense, but the fact remains that the way we search is a product of habit. It’s a conditioned subconscious strategy that was largely formed on the desktop. Most of us haven’t done enough searching on mobile devices yet to abandon our desktop-derived strategies and create new mobile-specific ones. So, our subconscious starts playing out the desktop script and only varies from it when it looks like it’s not going to deliver acceptable results. That’s why we’re still looking for that number one organic listing to benchmark against.

There were a few findings in the Mediative study that indicate our desktop habits may be starting to slip on mobile devices. But before we get to them, let’s quickly recap how habits play out. Habits are the brain’s way of cutting down on thinking. If we do something over and over again and get acceptable results, we store that behavior as a habit. The brain goes on autopilot so we don’t have to think our way through a task with predictable outcomes.

If, however, things change, either in the way the task plays out or in the outcomes we get, the brain reluctantly takes control again and starts thinking its way through the task. I believe this is exactly what’s happening with our mobile searches. The brain desperately wants to use its desktop habits, but the results are falling below our threshold of acceptability. That means we’re all somewhere in the process of rebuilding a search strategy more suitable for a mobile device.

Mediative’s study shows me a brain that’s caught between the desktop searches we’ve always done and the mobile searches we’d like to do. We still feel we should scroll to see at least the top organic result, but as mobile search results become more aligned with our intent, which is typically to take action right away, we are being sidetracked from our habitual behaviors and kicking our brains into gear to take control. The result is that when Google shows search elements that are probably more aligned with our intent – either local results, knowledge graphs or even highly relevant ads with logical ad extensions (such as a “call” link) – we lose confidence in our habits. We still scroll down to check out the organic result, but we probably scroll back up and click on the more relevant result.

All this switching back and forth between habitual and engaged interaction with the results ends up exacting a cost in terms of efficiency. We take longer to conduct searches on a mobile device, especially if the search shows other types of results near the top. In the study, participants spent an extra two seconds or so scrolling between the presented results (7.15 seconds for varied results vs. 4.95 seconds for organic-only results). And even though they spent more time scrolling, more participants ended up clicking on the mobile-relevant results they saw right at the top.

The trends I’m describing here are subtle – often playing out in a couple of seconds or less. And you might say that it’s no big deal. But habits are always a big deal. The fact that we’re still relying on desktop habits laid down over the past two decades shows how persistent they can be. If I’m right and we’re finally building new habits specific to mobile devices, those habits could dictate our search behaviors for a long time to come.

In Search – Even in Mobile – Organic Still Matters

I told someone recently that I feel like Rick Astley. You know, the guy who had the monster hit “Never Gonna Give You Up” in 1987 and is still trading on it almost 30 years later? He even enjoyed a brief resurgence of viral fame in 2007 when the world discovered what it meant to be “Rickrolled.”

For me, my “Never Gonna Give You Up” is the Golden Triangle eye-tracking study we released in 2005. It’s my one-hit wonder (to be fair to Astley, he did have a couple of other hits, but you get the idea). And yes, I’m still talking about it.

The Golden Triangle as we identified it existed because people were drawn to look at the number one organic listing. That’s an important thing to keep in mind. In today’s world of ad blockers and teeth-gnashing about the future of advertising, there is probably no purer or more controllable environment than the search results page. Creativity is stripped to the bare minimum. Ads have to be highly relevant and non-promotional in nature. Interaction is restricted to the few seconds required to scan and click. If there were anywhere ads might be tolerated, it’s on the search results page.

But…

If we fully trusted ads – especially ones as benign as those that show up on search results – there would have been no Golden Triangle. It only existed because we needed to see that top organic result, and dragging our eyes down to it formed one side of the triangle.

Fast-forward almost 10 years. Mediative, which is the current incarnation of my old company, released a follow-up study two years ago. While the Golden Triangle had definitely morphed into a more linear scan, the motivation remained – people wanted to scan down to see at least one organic listing. They didn’t trust ads then. They don’t trust ads now.

Google has used this need to anchor our scanning with the top organic listing to introduce a greater variety of results into the top “hot zone” – where scanning is the greatest. Now, depending on the search, there is likely to be at least a full screen of various results – including ads, local listings, reviews or news items – before your eyes hit that top organic web result. Yet, we seem to be persistent in our need to see it. Most people still make the effort to scroll down, find it and assess its relevance.

It should be noted that all of the above refers to desktop search. But almost a year ago, Google announced that – for the first time ever – more searches happened on a mobile device than on a desktop.

Mediative just released a new eye-tracking study (note: I was not involved at all with this one). This time, they dove into scan patterns on mobile devices. Given the limited real estate and the fact that, for many popular searches, you would have to consciously scroll down at least a couple of times to see the first organic result, did users become more accepting of ads?

Nope. They just scanned further down!

The study’s first finding was that the #1 organic listing still captures the most click activity, but it takes users almost twice as long to find it compared to a desktop.

The study’s second finding was that even though organic is still important, position matters more than ever. Users will make the effort to find the top organic result and, once they do, they’ll generally scan the top 4 results, but if they find nothing relevant, they probably won’t scan much further. In the study, 92.6% of the clicks happened above the 4th organic listing. On a desktop, 84% of the clicks happened above the number 4 listing.

The third finding shows an interesting paradox that’s emerging on mobile devices: we’re carrying our search habits from the desktop over with us – especially our need to see at least one organic listing. The average time to scan the top sponsored listing was only 0.36 seconds, meaning that people checked it out immediately after orienting themselves to the mobile results page, but for those who clicked the listing, the average time to click was 5.95 seconds. That’s almost 50% longer than the average time to click on a desktop search. When organic results are pushed down the page by other content, it takes us longer before we feel confident enough to make our choice. We still need to anchor our relevancy assessment with that top organic result, and that’s causing us to be less efficient in our mobile searches than we are on the desktop.

The study also indicated that these behaviors could be in flux. We may be adapting our search strategies for mobile devices, but we’re just not quite there yet. I’ll touch on this in next week’s column.


The Face of Disruption

If you ask publishing giant Elsevier, Alexandra Elbakyan is a criminal – a pernicious pirate.

If you ask the Lifeboat Foundation, or blogger P.Z. Myers, or millions of students around the world, Alexandra Elbakyan is a hero.

Labels can be tricky things, especially in a world of disruption.

Ms. Elbakyan certainly doesn’t look like a criminal. You would walk right past her on a campus quad and think nothing of it. She looks pretty much the way you would expect a post-grad neuroscience student from Kazakhstan to look.

But her face is the face of disruption. And she’s at the receiving end of a lawsuit launched by Elsevier that, if you were to take it seriously, would be worth several billion dollars.

Just over a year ago, I wrote a column about the academic journal racket. The work of thousands of researchers is published by Elsevier and others and remains locked behind hugely expensive paywalls. Elbakyan, as a post-grad research student at a university that couldn’t afford to pay the licensing fees to gain access to these journals, got frustrated. In a letter she wrote in response to the lawsuit, she elaborated on this frustration:

“When I was a student in Kazakhstan University, I did not have access to any research papers. These papers I needed for my research project. Payment of 32 dollars is just insane when you need to skim or read tens or hundreds of these papers to do research. I obtained these papers by pirating them.”

Elbakyan was not alone in this piracy.

“Later I found there are lots and lots of researchers (not even students, but university researchers) just like me, especially in developing countries. They created online communities (forums) to solve this problem.”

“…to solve this problem.” There, in a nutshell, is the source of disruption. Elbakyan thought there had to be a more efficient way to facilitate this communal piracy and turned to technology, launching the Sci-Hub search portal in 2011. Relying on access keys donated by academics at institutions with subscriptions to research publishers, Sci-Hub bypasses the paywall and locates the paper a researcher is looking for. It then delivers the paper and saves a copy to LibGen, a library of “pirated” papers that will continue to be freely available to future researchers. The LibGen database now has over 48 million papers available.

Is Elbakyan guilty of piracy? Absolutely – as it’s defined by the law. She makes no bones about the fact. She uses the term repeatedly in her own letter of defense.

But, in that letter, Alexandra Elbakyan also appeals to a higher law – the law of fairness. She is not stealing from the authors of that research, who receive no compensation for their work from the publisher. When Elsevier claims “irreparable harm,” the only harm that can be identified is to its own business model. There is no harm to academics, who are becoming increasingly hostile to the business practices of publishers like Elsevier. There is certainly no harm to fellow researchers, who now have open access to knowledge, helping them in their own work. And there is no harm to the public, who can only benefit from the more open sharing of knowledge amongst academics. The only one hurt here is Elsevier.

According to the 2014 annual report of RELX (the parent company of Elsevier), the company raked in £2,944 million ($4.23 billion US) from its various subscription businesses. The Scientific, Technical and Medical division (the same division that Elbakyan “irreparably harmed”) had revenues of £2,048 million ($2.94 billion US) and a tidy little operating profit of £787 million ($1.13 billion US).

Poor Elsevier.

The question that should be asked here is not whether Elsevier’s business model has been harmed, but rather, does it deserve to live? According to that same annual report, they “help scientists make new discoveries, lawyers win cases, doctors save lives and executives forge commercial relationships with their clients.”

Actually, no.

Elsevier does none of those things. The information they deal in does those things. And that same information is finding a way to be free, thanks to people like Alexandra Elbakyan. Elsevier is just the middleman being cut out of the supply chain by technology.

The American legal system will undoubtedly side with Elsevier. The law, as it is currently written, defends the right of a corporation to do business, whether or not people like you and me deem that business ethical. But ultimately, we rely on our laws to be fair, and what is fair depends on the context of our society. That context can be changed through the forces of disruption.

Sometimes, disruption comes in the guise of a young post-grad student from Kazakhstan.

How Activation Works in an Absolute Value Market

As I covered last week, if I mention a brand to you – like Nike, for instance – your brain immediately pulls back your own interpretation of the brand. What has happened, in a split second, is that the activation of that one node – let’s call it the Nike node – triggers the activation of several related nodes in your brain, which are quickly assembled into a representation of the brand Nike. This is called Spreading Activation.
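(For readers who like to see the machinery, here’s a minimal sketch of how spreading activation is often modeled: one node lights up, and a decaying share of its activation flows out to associated nodes, which pass it on in turn. The network, weights, decay and threshold below are purely illustrative assumptions of mine, not data from any study I’m citing.)

```python
# Toy spreading-activation sketch: activating "Nike" pushes a decaying share
# of activation out to associated nodes. All nodes and values are illustrative.
network = {
    "Nike": {"swoosh": 0.9, "running shoes": 0.8, "Just Do It": 0.7},
    "running shoes": {"marathon": 0.6},
    "Just Do It": {"athletes": 0.5},
}

def spread(source, decay=0.5, threshold=0.1):
    """Return activation levels reachable from `source`, breadth-first."""
    activation = {source: 1.0}
    frontier = [source]
    while frontier:
        node = frontier.pop(0)
        for neighbor, weight in network.get(node, {}).items():
            incoming = activation[node] * weight * decay
            if incoming > activation.get(neighbor, 0.0) and incoming > threshold:
                activation[neighbor] = incoming
                frontier.append(neighbor)
    return activation

print({node: round(level, 2) for node, level in spread("Nike").items()})
# {'Nike': 1.0, 'swoosh': 0.45, 'running shoes': 0.4, 'Just Do It': 0.35, 'marathon': 0.12}
```

The point isn’t the math; it’s that a single prime fans out into a whole constellation of associations in a fraction of a second.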

This activation is all internal. It’s where most of the efforts of advertising have been focused over the past several decades. Advertising’s job has been to build a positive network of associations so when that prime happens, you have a positive feeling towards the brand. Advertising has been focused on winning territory in this mental landscape.

Up to now, the boundaries of our own rationality have restricted us to this internal landscape when making consumer decisions. Access to reliable and objective information about possible purchases was limited. It required more effort on our part than we were willing to expend. So, for the vast majority of purchases, these internal representations were enough for us. They acted as a proxy for information that lay beyond our grasp.

But the world has changed. For almost any purchase category you can think of, there exists reliable, objective information that is easy to access and filter. We are no longer restricted to internal brand activations (relative values based on our own past experiences and beliefs). Now, with a few quick searches, we can access objective information, often based on the experiences of others. In their book of the same name, Itamar Simonson and Emanuel Rosen call these sources “absolute values.” For more and more purchases, we turn to external sources because we can. The effort invested is more than compensated for by the value returned. In the process, the value of traditional branding is being eroded. This is truer for some product categories than others. The higher the risk or the level of interest, the more the prospect will engage in an external activation. But across all product categories, there has been a significant shift from the internal to the external.

What this means for advertising is that we have to shift our focus from internal spreading activations to external ones. Now, when we retrieve an internal representation of a product or brand, it typically acts as a starting point, not the end point. That starting point is then modified or discarded completely, depending on the external information we access. The first activated node is our own initial concept of the product, but the subsequent nodes are spread throughout the digitized information landscape.

In an internal spreading activation, the nodes activated and the connections between them are all made at a subconscious level. It’s beyond our control. But an external spreading activation is a different beast. It’s a deliberate information search conducted by the prospect. That means that the nodes accessed and the connections between those nodes become critically important. Advertisers have to understand what those external activation maps look like. They have to be intimately aware of the information nodes accessed and the connections used to get to those nodes. They also have to be familiar with the prospect’s information consumption preferences. At first glance, this seems an impossibly complex landscape to navigate. But in practice, we all tend to follow remarkably similar paths when establishing our external activation networks. Search is often the first connector we use. The nodes accessed and the information within those nodes follow predictable patterns for most product categories.

For the advertiser, it comes down to a question of where to most profitably invest your efforts. Traditional advertising was built on the foundation of controlling the internal activation. This was the psychology behind classic treatises such as Ries and Trout’s “Positioning: The Battle for Your Mind.” And, in most cases, that battle was won by whoever could assemble the best collection of smoke and mirrors. Advertising messaging had very little to do with facts and everything to do with persuasion.

But as Simonson and Rosen point out, the relative position of a brand in a prospect’s mind is becoming less and less relevant to the eventual purchase decision. Many purchases are now determined by what happens in the external activation. Factual, reliable information and easy access to that information become critical. Smoke and mirrors are relegated to advertising “noise” in this scenario. The marketer with a deep understanding of how the prospect searches for and determines the “truth” about a potential product will be the one who wins. And traditional marketing is becoming less and less important to that prospect.


Why More Connectivity is Not Just More – Why More is Different

Eric Schmidt is predicting from Davos that the Internet will disappear. I agree. I’ve always said that Search will go under the hood, changing from a destination to a utility. Not that Mr. Schmidt or the Davos crew needs my validation. My invitation seems to have gotten lost in the mail.

Laurie Sullivan’s recent post goes into some of the specifics of how search will become an implicit rather than an explicit utility. Underlying this is a pretty big implication that we should be aware of: the very nature of connectivity will change. Right now, the Internet is a tool, or a resource. We access it through conscious effort. It’s a “task at hand.” Our attention is focused on the Internet when we engage with it. The world described by Eric Schmidt and the rest of the panel is much, much different. In this world, the “Internet of Things” creates a connected environment that we exist in. And this has some pretty important implications for us.

First of all, when something becomes an environment, it surrounds us. It becomes our world as we interpret it through our assorted sensory inputs. These inputs have evolved to interpret a physical world – an environment of things. We will need help interpreting a digital world – an environment of data. Our reality, or what we perceive our reality to be, will change significantly as we introduce technologically mediated inputs into it.

Our brains were built to parse information from a physical world. We have cognitive mechanisms that evolved to do things like keep us away from physical harm. Our brains were never intended to crunch endless reams of digital data. So, we will have to rely on technology to do that for us. Right now we have an uneasy alliance between our instincts and the capabilities of machines. We are highly suspicious of technology. There is every rational reason in the world to believe that a self-driving Google car will be far safer than a two-ton chunk of accelerating metal under the control of a fundamentally flawed human, but how many of us are willing to give up the wheel? The fact is, however, that if we want to function in the world Schmidt hints at, we’re going to have to learn not only to trust machines, but also to rely totally on them.

The other implication is one of bandwidth. Our brains have bottlenecks. Right now, our brains, together with our senses, subconsciously monitor our environment and, if the situation warrants, wake up our conscious minds for some focused and deliberate processing. The busier our environment gets, the bigger this challenge becomes. A digitally connected environment will soon exceed our brain’s ability to comprehend and process information. We will have to determine some pretty stringent filtering thresholds. And we will rely on technology to do the filtering. As I said, our physical senses were not built to filter a digital world.

It’s an odd relationship with technology that will have to develop. Even if we lower our guard and let machines do much of our “thinking” (in terms of processing environmental inputs for us), we still have to learn how to give machines guidelines so they know what our intentions are. This raises the question: how smart do we want machines to become? Do we want machines that can learn about us over time, without explicit guidance from us? Are we ready for technology that guesses what we want?

One of the comments on Laurie’s post was from Jay Fredrickson: “Sign me up for this world, please. When will this happen and be fully rolled out? Ten years? 20 years?” Perhaps we should be careful what we wish for. While this world may seem to be a step forward, we will actually be stepping over a threshold into a significantly different reality. As we step over that threshold, we will change what it means to be human. And there will be no stepping back.

Evolved Search Behaviors: Takeaways for Marketers

In the last two columns, I first looked at the origins of the original Golden Triangle, and then looked at how search behaviors have evolved over the last nine years, according to a new eye-tracking study from Mediative. In today’s column, I’ll try to pick out a few “so whats” for search marketers.

It’s Not About Location, It’s About Intent

In 2005, search marketing was all about location. It was about grabbing a part of the Golden Triangle, and the higher, the better. The delta between scanning and clicks from the first organic result to the second was dramatic – a factor of 2 to 1! Similar differences were seen in the top paid results. It’s as if, given the number of options available on the page (usually between 12 and 18, depending on the number of ads showing), searchers used position as a quick and dirty way to filter results, reasoning that the higher the result, the better match it would be to their intent.

In 2014, however, it’s a very different story. Because the first scan is now to find the most appropriate chunk, the importance of being high on the page is significantly lessened. Also, once the second step of scanning has begun within a results chunk, there seems to be more vertical scanning and less lateral scanning. Mediative found that in some instances, it was the third or fourth listing in a chunk that attracted the most attention, depending on content, format and user intent. For example, in the heat map shown below, the third organic result actually got as many clicks as the first, capturing 26% of all the clicks on the page and 15% of the time spent on the page. The reason could be that it was the only listing that had the Google Ratings rich snippet, thanks to the proper use of structured data markup. In this case, the information scent that promised user reviews was a strong match with user intent, but you would only know this if you knew what that intent was.

[Heat map: Google results page, Ford Fiesta search]

This change in users’ scanning strategies makes it more important than ever to understand the most common intents that would make them turn to a search engine. What are the decision steps they go through, and at which of those steps might they turn to a search engine? Would it be to discover a solution to an identified need, to find out more about a known solution, to help build a consideration set for direct comparisons, to look for one specific piece of information (i.e., a price) or to navigate to one particular destination, perhaps to order online? If you know why your prospects might use search, you’ll have a much better idea of what you need to do with your content to ensure you’re in the right place at the right time with the right content. Nothing shows this more clearly than the following comparison of heat maps. The one on the left was produced when searchers were given a scenario that required them to gather information. The one on the right resulted from a scenario where searchers had to find a site to navigate to. You can see the dramatic difference in scanning behaviors.

[Heat maps compared: information-gathering intent (left) vs. navigational intent (right)]

If search used to be about location, location, location, it’s now about intent, intent, intent.

Organic Optimization Matters More than Ever!

Search marketers have been saying that organic optimization has been dying for at least two decades now, ever since I got into this industry. Guess what? Not only is organic optimization not dead, it’s now more important than ever! In Enquiro’s original 2005 study, the top two sponsored ads captured 14.1% of all clicks. In Mediative’s 2014 follow-up, the number really didn’t change that much, edging up to 14.5%. What did change was the relevance of the rest of the listings on the page. In 2005, all the organic results combined captured 56.7% of the clicks. That left about 29% of users either going to the second page of results, launching a new search or clicking on one of the side sponsored ads (which accounted for only a small fraction of the clicks). In 2014, the organic results, including all the different category “chunks,” captured 74.6% of the clicks. That leaves only about 11% either clicking on the side ads (again, a tiny percentage), going to the second page or launching a new search. That means Google has upped its first-page success rate to an impressive 90%.
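(If you want to check my math, here’s the back-of-the-envelope arithmetic using the click shares cited above. The 84% figure I use in the next paragraph is my own reading of the numbers: the organic share of the clicks that stay on the first page.)

```python
# Back-of-the-envelope arithmetic on the 2014 click shares quoted above.
top_sponsored = 14.5    # top sponsored ads, % of all clicks on the page
organic_chunks = 74.6   # all organic "chunks" combined, % of all clicks

first_page = top_sponsored + organic_chunks
print(f"First-page success rate: {first_page:.1f}%")                 # 89.1% -- roughly 90%

organic_share = organic_chunks / first_page * 100
print(f"Organic share of first-page clicks: {organic_share:.0f}%")   # 84%
```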

First of all, that means you really need to break onto the first page of results to gain any visibility at all. If you can’t do it organically, make sure you pay for presence. But secondly, it means that some type of organic result is capturing about 84% of the clicks that stay on the first page. The trick is to know which type of organic result will capture the click – and to do that you need to know the user’s intent (see above). But you also need to optimize across your entire content portfolio. With my own blog, two of the biggest traffic referrers happen to be image searches.

Left Gets to Lead

The left side of the results page has always been important, but the evolution of scanning behaviors now makes it vital. The heat map below shows just how important it is to seed the left-hand side of your results with information scent.

[Heat map: scanning concentrated on the left-hand side of the Google results page]

Last week, I talked about how the categorization of results has caused us to adopt a two-stage scanning strategy: the first stage to determine which “chunks” of result categories are the best match to our intent, and the second to evaluate the listings in the most relevant chunks. The vertical scan down the left-hand side of the page is where we decide which “chunks” of results are the most promising. And, in the second scan, because of the improved relevancy, we often make the decision to click without a lot of horizontal scanning to qualify our choice. Remember, we’re only spending a little over a second scanning a result before we click. That’s just enough to pick up the barest whiffs of information scent, and almost all of that scent comes from the left side of the listing. Look at the three choices above that captured the majority of scanning and clicks. The search was for “home decor store toronto.” The first popular result was a local result for the well-known brand Crate and Barrel. This reinforces how important brands can be if they show up on the left side of the result set. The second popular result was a website listing for another well-known brand – The Pottery Barn. The third was a link to Yelp, a directory site that offered a choice of options. In all cases, the scent found in the far left of the result was enough to capture a click. There was almost no lateral scanning to the right. When crafting titles, snippets and metadata, make sure you stack information scent to the left.

In the end, there are no magic bullets from this latest glimpse into search behaviors. It still comes down to the five foundational planks that have always underpinned good search marketing:

  1. Understand your user’s intent
  2. Provide a rich portfolio of content and functionality aligned with those intents
  3. Ensure your content appears at or near the top of search results, either through organic optimization or well run search campaigns
  4. Provide relevant information scent to capture clicks
  5. Make sure you deliver on what you promise post-click

Sure, the game is a little more complex than it was 9 years ago, but the rules haven’t changed.

Two Views of the Promise of Technology

In the last two columns, I’ve looked at how technology may be making us intellectually lazy. The human brain tends to follow the path of least resistance, and technology’s goal is to eliminate resistance. Last week, I cautioned that this may end up making us both more shallow in our thinking and more fickle in our social ties. We may become an attention-deficit society, skipping across the surface of the world. But this doesn’t necessarily have to be the case.

The debate is not a new one. Momentous technologies generally come complete with their own chorus of naysayers. Whether it’s the invention of writing, the printing press, electronic communication or digital media, the refrain is the same – this will be the end of the world as we know it. But if history has taught us anything, it’s that new technologies are seldom completely beneficial or harmful. Their lasting impact lies somewhere in the middle. With the good comes some bad.

The same will be true for the current digital technologies. The world will change, both for the positive and for the negative. The difference will come in how individuals use the technology. This will spread out along the inevitable bell curve.

Look at television, for instance. A sociologist could make a pretty convincing case for the benefits of TV. A better understanding of the global community helped ease our xenophobic biases. Public demand led to increased international pressure on repressive regimes. There was a sociological leveling that is still happening across cultures. Civil rights and sexual equality were propelled by the coverage they received. Atrocities still happen with far too much regularity, but I personally believe the world is a less savage and brutal place than it was 100 years ago, partially due to the spread of TV.

On the flip side, we have developed a certain laziness of spirit that is fed by TV’s never-ending parade of entertainment to be passively consumed. We spend less time visiting our neighbors. We volunteer less. We’re less involved in our communities. Ironically, we’re a more idealistic society, but we make poorer neighbors.

The type of programming to be found on TV also shows that despite the passive nature of the medium, we didn’t become stupider en masse. Some of us use TV for enlightenment, and some of us use it to induce ourselves into a coma. At the end of the day, I think the positives and negatives of TV as a technology probably net out a little better than neutral.

I suspect the same thing is happening with digital media. Some of us are diving deeper and learning more than ever. Others are clicking their way through site after site of brain-porn. Perhaps there are universal effects that will show up over generations and tip the scale one way or the other, but we’re too early in the trend to see those yet. The fact is, digital technologies are not changing our brains in a vacuum. Our environment is also changing, and perhaps our brains are just keeping up. The 13-year-old who is frustrating the hell out of us today may be a much better match for the world 20 years from now.

I’ll wrap up by leaving three pieces of advice that seem to provide useful guides for getting the best out of new technologies.

First: A healthy curiosity is something we should never stop nurturing. In particular, I find it helpful to constantly ask “how” and “why.”

Second: Practice mindfulness. Be aware of your emotions and cognitive biases and recognize them for what they are. This will help you steer things back on track when they’re leading down an unhealthy path.

Third: Move from consuming content to contributing something meaningful. The discipline of publishing tends to push you beyond the shallows.

If you embrace the potential of technology, you may still find yourself as an outlier, but technology has done much to allow a few outliers to make a huge difference.

Are Our Brains Trading Breadth for Depth?

In last week’s column, I looked at how efficient our brains are. Essentially, if there’s a shortcut to an end goal identified by the brain, it will find it. I explained how Google is eliminating the need for us to remember easily retrievable information. I also speculated about how our brains may be defaulting to an easier form of communication, such as texting rather than face-to-face conversation.

Personally, I am not entirely pessimistic about the “Google Effect,” where we put less effort into memorizing information that can be easily retrieved on demand. This is an extension of Daniel Wegner’s “transactive memory,” and I would put it in the category of coping mechanisms. It makes no sense to expend brainpower on something that technology can do more easily, faster and more reliably. As John Mallin commented, this is like using a calculator rather than memorizing times tables.

Reams of research have shown that our memories can be notoriously inaccurate. In this case, I partially disagree with Nicholas Carr. I don’t think Google is necessarily making us stupid. It may be freeing up the incredibly flexible power of our minds, giving us the opportunity to redefine what it means to be knowledgeable. Rather than a storehouse of random information, our minds may have the opportunity to become more creative integrators of available information. We may be able to expand our “meta-memory,” Wegner’s term for the layer of memory that keeps track of where to turn for certain kinds of knowledge. Our memory could become an index of interesting concepts and useful resources, rather than ad-hoc scraps of knowledge.

Of course, this positive evolution of our brains is far from a given. And here Carr may have a point. There is a difference between “lazy” and “efficient.” Technology’s freeing up of our brain’s processing power is only a good thing if that power is then put to a higher purpose. Carr’s title, “The Shallows,” is a warning that rather than freeing up our brains to dive deeper into new territory, technology may just give us the ability to skip across the surface of the titillating. Will we waste our extra time and cognitive power going from one piece of brain candy to the next, or will we invest it by sinking our teeth into something important and meaningful?

A historical perspective gives us little reason to be optimistic. We evolved to balance the efforts required to find food with the nutritional value we got from that food. It used to be damned hard to feed ourselves, so we developed preferences for high calorie, high fat foods that would go a long way once we found them. Thanks to technology, the only effort required today to get these foods is to pick them off the shelf and pay for them. We could have used technology to produce healthier and more nutritious foods, but market demands determined that we’d become an obese nation of junk food eaters. Will the same thing happen to our brains?

I am even more concerned about the shortcuts that seem to be developing in our social networking activities. Typically, our social networks are built from both strong ties and weak ties. Mark Granovetter identified these two types of social ties in the ’70s. Strong ties bind us to family and close friends. Weak ties connect us with acquaintances. When we hit rough patches, as we inevitably do, we treat those ties very differently. Strong ties are typically much more resilient to adversity. When we hit the lowest points in our lives, it’s the strong ties we depend on to pull us through. Our lifelines are made up of strong ties. If we have a disagreement with someone with whom we have a strong tie, we work harder to resolve it. We have made large investments in these relationships, so we are reluctant to let them go. When there are disruptions in our strong-tie network, there is a strong motivation to eliminate the disruption rather than sacrifice the network.

Weak ties are a whole different matter. We have minimal emotional investments in these relationships. Typically, we connect with these either through serendipity or when we need something that only they can offer. For example, we typically reinstate our weak tie network when we’re on the hunt for a job. LinkedIn is the virtual embodiment of a weak tie network. And if we have a difference of opinion with someone to whom we’re weakly tied, we just shut down the connection. We have plenty of them so one more or less won’t make that much of a difference. When there are disruptions in our weak tie network, we just change the network, deactivating parts of it and reactivating others.

Weak ties are easily built. All we need is just one thing in common at one point in our lives. It could be working in the same company, serving on the same committee, living in the same neighborhood or attending the same convention. Then, we just need some way to remember them in the future. Strong ties are different. Strong ties develop over time, which means they evolve through shared experiences, both positive and negative. They also demand consistent communication, including painful communication that sometimes requires us to say we were wrong and we’re sorry. It’s the type of conversation that leaves you either emotionally drained or supercharged that is the stuff of strong ties. And a healthy percentage of these conversations should happen face-to-face. Could you build a strong tie relationship without ever meeting face-to-face? We’ve all heard examples, but I’d always place my bets on face-to-face – every time.

It’s the hard work of building strong ties that I fear we may miss as we build our relationships through online channels. I worry that the brain, given an easy choice and a hard choice, will naturally opt for the easy one. Online, our network of weak ties can grow beyond the inherent limits of our social inventory, known as Dunbar’s Number (which is 150, by the way). We can always find someone with whom to spend a few minutes texting or chatting online. Then we can run off to the next one. We will skip across the surface of our social network, rather than invest the effort and time required to build strong ties. Just like our brains, our social connections may trade breadth for depth.

The Pros and Cons of a Fuel Efficient Brain

Your brain will only work as hard as it has to. And if it makes you feel any better, my brain is exactly the same. That’s the way brains work. They conserve horsepower until it’s absolutely needed. In the background, the brain is doing a constant calculation: “What do I want to achieve and, based on everything I know, what is the easiest way to get there?” You could call it lazy, but I prefer the term “efficient.”

The brain has a number of tricks to do this that involve relatively little thinking. In most cases, they involve swapping something that’s easy for your brain to do in place of something difficult. For instance, consider what happens when you vote. It would be extraordinarily difficult to weigh all the factors involved and truly make an informed vote. It would require a ton of brainpower. But it’s very easy to vote for whom you like. We have a number of tricks we use to immediately assess whether we like and trust another individual, and they require next to no brainpower. Guess how most people vote? Even those of us who pride ourselves on being informed voters rely on these brain shortcuts more than we would like to admit.

Here’s another example that’s just emerging, thanks to search engines. It’s called the Google Effect, and it’s an extension of a concept called transactive memory. Researchers Betsy Sparrow, Jenny Liu and Daniel Wegner identified the Google Effect in 2011. Wegner first explained transactive memory back in the ’80s. Essentially, it means that we won’t bother to remember something that we can easily reference when we need it. When Wegner first talked about transactive memory in the ’80s, he used the example of a husband and wife. The wife was good at remembering important dates, such as anniversaries and birthdays. The husband was good at remembering financial information, such as bank balances and when bills were due. The wife didn’t have to remember financial details, and the husband didn’t have to worry about dates. All they had to remember was what the other was good at memorizing. Wegner called this “chunking” of our memory requirements “metamemory.”

If we fast-forward 30 years from Wegner’s original paper, we find a whole new relevance for transactive memory, because we now have the mother of all “metamemories,” called Google. If we hear a fact but know that it’s something that can easily be looked up on Google, our brains automatically decide to expend little to no effort in trying to memorize it. Subconsciously, the brain goes into power-saver mode. All we remember is that when we do need to retrieve the fact, it will be a few clicks away on Google. Nicholas Carr fretted about whether this and other cognitive shortcuts were making us stupid in his book “The Shallows.”
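(A loose analogy, and entirely my own: you can think of metamemory as an index that stores where a fact lives rather than the fact itself, with Google as the default entry.)

```python
# A loose analogy for transactive "metamemory" (the husband's side of Wegner's
# example): keep a pointer to where a fact lives, not the fact itself.
# The entries and the Google default are illustrative, not from the research.
metamemory = {
    "our anniversary": "ask my wife",
    "the kids' birthdays": "ask my wife",
    "the bank balance": "my own memory",
}

def recall(topic):
    # We don't retrieve the answer; we retrieve the source to consult.
    return metamemory.get(topic, "look it up on Google")

print(recall("our anniversary"))        # -> ask my wife
print(recall("melting point of lead"))  # -> look it up on Google
```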

But there are other side effects that come from the brain’s tendency to look for shortcuts without our awareness. I suspect the same thing is happening with social connections. Which would you think requires more cognitive effort: a face-to-face conversation with someone, or texting them on a smartphone?

A face-to-face conversation can put a huge cognitive load on our brains. We’re receiving communication at a much greater bandwidth than with text. When we’re across from a person, we not only hear what they’re saying, we’re reading emotional cues, watching facial expressions, interpreting body language and monitoring vocal tones. It’s a much richer communication experience, but it’s also much more work. It demands our full attention. Texting, on the other hand, can easily be done alongside other tasks. It’s asynchronous – we can pause and pick up whenever we want. I suspect it’s no coincidence that younger generations are moving more and more to text-based digital communication. Their brains are pushing them in that direction because it’s less work.

One of the great things about technology is that it makes our lives easier. But is that also a bad thing? If we know that our brains will always opt for the easiest path, are we putting ourselves into a long, technology-aided death spiral? That was Nicholas Carr’s contention. Or are we freeing up our brains for more important work?

More on this to come next week.