McLuhan 50 Years Later

First published December 20, 2012 in Mediapost’s Search Insider

My daughter, who is in her senior year of high school, recently wrote an essay on Marshall McLuhan. She asked me to give my thoughts on McLuhan’s theories of media. To be honest, I hadn’t given McLuhan much thought since my college days, when I had packed away “Understanding Media: The Extensions of Man” for what I thought would likely be forever. I always found the title ironic. This book does many things, but promoting “understanding” is not one of them. It’s one of the more incomprehensible texts I’ve ever encountered.

My daughter’s essay caused me to dig up my half-formed understanding of what McLuhan was trying to say. I also tried to update that understanding from the early ‘60s, when it was written, to a half-century later, in the world we currently live in.

Consider this passage from McLuhan, written exactly 50 years ago: “The next medium, whatever it is—it may be the extension of consciousness—will include television as its content, not as its environment, and will transform television into an art form. A computer as a research and communication instrument could enhance retrieval, obsolesce mass library organization, retrieve the individual’s encyclopedic function and flip into a private line to speedily tailored data of a saleable kind.”

(See, I told you it was incomprehensible!)

The key thing to understand here is that McLuhan foretold something that I believe is unfolding before our eyes: The media we interact with are changing our patterns of cognition – not the message, but the medium itself. We are changing how we think. And that, in turn, is changing our society. While we focus on the messages we receive, we fail to notice that the ways we receive those messages are changing everything we know, forever. Twitter, Facebook, Google, the Xbox and YouTube – all are co-conspirators in a wholesale rewiring of our world.

Now, to borrow from McLuhan’s own terminology, no one in our Global Village could ignore the horrific unfolding of events in Connecticut last week. But the channels we received the content through also affected our intellectual and visceral connection with that content. Watching parents search desperately for their children on television was a very different experience from catching the latest CNN update delivered via my iPhone.

When we watched through “hot” media, we connected at an immediate and emotional level. When the message was delivered through “cool” media, we stood somewhat apart, framing the message and interpreting it, abstracted at some remove from the sights and sounds of what was unfolding. Because of the emotional connection afforded by the “hot” media, the terror of Newtown was also our own.

McLuhan foretold this as well: “Unless aware of this dynamic, we shall at once move into a phase of panic terrors, exactly befitting a small world of tribal drums, total interdependence, and superimposed co-existence. […] Terror is the normal state of any oral society, for in it everything affects everything all the time.”

My daughter is graduating next June. The world she will inherit will bear little resemblance to the one I stepped into, fresh from my own graduation in 1979. It is smaller, faster, more connected and, in many ways, more terrifying. But, has the world changed as much as it seems, or is it just the way we perceive that world? And, in that perception, are we the ones unleashing the change?

The “Savanna” Hypothesis of Online Design

First published December 6, 2012 in Mediapost’s Search Insider

I’m currently reading a fascinating paper titled “Evolved Responses to Landscapes” by Gordon Orians and Judith Heerwagen, written back in 1992. The objective was to see if humans have an evolved preference for an ideal habitat. The researchers called their hunch the Savanna Hypothesis, noting that because Homo sapiens spent much of our evolutionary history on the plains of tropical Africa, we should have a natural affinity for this type of landscape.

Your typical savanna features some cover from vegetation and trees, but not so much that natural predators could advance unnoticed. The environment should offer enough lushness to indicate the presence of ample food and water. It should allow for easy mobility. And it should be visually intriguing, encouraging us to venture out and explore our habitat.

Here’s a quote from the paper: “Landscapes that aid and encourage exploration, wayfinding and information processing should be more favored than landscapes that impede these needs.”

The researchers, after showing participants hundreds of pictures of different landscapes, found significant support for their hypothesis. Most of us have a preference for landscapes that resemble our evolutionary origin. And the younger we are, the more predictable the preference. With age, we tend to adapt to where we live and develop a preference for it.

In reading this study, I couldn’t help but equate it to Pirolli and Card’s Information Foraging Theory. The two PARC researchers said that the strategies we use to hunt for information in a hyperlinked digital format (such as a webpage) seem to correspond to evolved optimal foraging strategies used by many species, including humans back in our hunting and foraging days. If, as Pirolli and Card theorized, we borrow inherent strategies for foraging and adapt them for new purposes, like looking for information, why wouldn’t we also apply evolved environmental preferences to new experiences, like the design of a Web page?

Consider the description of an ideal habitat quoted above. We want to be able to quickly determine our navigation options, with just a teaser of things still to explore. We want open space, so we can quickly survey our options, but we also want the promise of abundant rewards, either in the form of food and sustenance — or, in the online case, information and utility. After all, what is a website but another environment to navigate?

I find the idea of creating a home page design that incorporates a liberal dose of intrigue and promise particularly compelling. In a physical space, such an invitation may take the form of a road or pathway curving behind some trees or over a gentle rise. Who can resist such an invitation to explore just a little further?

Why should we take the same approach with a home page or landing page? Orians and Heerwagen explain that we tend to “way-find” through new environments in three distinct stages: First, we quickly scan the environment to decide if it’s even worth exploring. Do we stay here or move on to another, more hospitable location? This very quick scan frames all the interactions that follow it. After this “go/no-go” scan, we then start surveying the environment to gather information and find the most promising path to take. The final phase — true engagement with our surroundings — is when we decide to stay put and get some things done.

Coincidentally (or not?), I have found that users take a very similar approach to evaluating a webpage. We’ve even enshrined this behavior in a usability best practice we call the “3 Scan Rule.” The first scan is to determine the promise of the page. Is it visually appealing? Is it relevant? Is it user-friendly? All these questions should be answerable in one second or less. In fact, a study at Carleton University found that we can reliably judge the aesthetic appeal of a website in as short a span as 50 milliseconds. That’s less time than it takes to blink an eye.

The second scan is to determine the best path. This typically involves exploring the primary navigation options, scanning graphics and headings and quickly looking at bullet lists to determine how “rich” the page is. Is it relevant to our intent? Does it look like there’s sufficient content for us to invest our time? Are there compelling navigation options that offer us more? This scan should take no more than 10 seconds.

Finally, there’s the in-depth scan. It’s here where we more deeply engage with the content. This can take anywhere from several seconds to several minutes.

At this point, the connection between the inherently pleasing characteristics of the African savanna and a well-designed website is no more than a hypothesis on my part. But I have to admit: I find the concept intriguing, like a half-obscured pathway disappearing over a swell on the horizon, waiting to be explored.

Pursuing the Unlaunched Search

First published November 29, 2012 in Mediapost’s Search Insider

Google’s doing an experiment. Eight times a day, at random intervals, 150 people get an alert on their smartphones, and Google asks them this question: “What did you want to know recently?” The goal? To find out all the things you never thought to ask Google about.

This is a big step for Google. It moves search into a whole new arena. It’s shifting the paradigm from explicit searching to implicit searching. And that’s important for all of the following reasons:

Search is becoming more contextually sensitive. Mobile search is contextually sensitive search. If you have your calendar, your to-do list, your past activities and a host of other information all stored on a device that knows where you are, it becomes much easier to guess what you might be interested in. Let’s say, for example, that your calendar has “Date with Julie” entered at 7 p.m., and you’re downtown. In the past year, 57% of your “dates with Julie” have involved dinner and a movie. You usually spend between $50 and $85 on dinner, and your movies of choice generally vacillate between rom-coms and action-adventures (depending on who gets to choose).

In this scenario, without waiting for you to ask, Google could probably be reasonably safe in suggesting local restaurants that match your preferences and price ranges, showing you any relevant specials or coupons, and giving you the line-up of suggested movies playing at local theatres. Oh, and by the way, you’re out of milk and it’s on sale at the grocery store on the way home.
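Just to make that concrete, here’s a minimal sketch (in Python) of what such an implicit-suggestion heuristic might look like. Everything in it is hypothetical: the function, the data fields and the thresholds are my own invention, not anything Google has described.

```python
# A hypothetical implicit-suggestion heuristic. All field names, thresholds
# and data are invented for illustration; this is not Google's actual logic.

def suggest(event, location, history):
    """Propose actions before the user asks, based on context and past behavior."""
    past = [h for h in history if h["event"] == event]
    if not past:
        return []
    suggestions = []
    # If most past occurrences of this event involved dinner, suggest
    # restaurants in the price range the user has historically spent.
    if sum("dinner" in h["activities"] for h in past) / len(past) > 0.5:
        low = min(h["spend"] for h in past)
        high = max(h["spend"] for h in past)
        suggestions.append(f"Restaurants near {location}, ${low}-${high}, with current specials")
    # If movies are a recurring activity for this event, suggest local listings.
    if any("movie" in h["activities"] for h in past):
        suggestions.append(f"Tonight's movie listings near {location}")
    return suggestions

history = [
    {"event": "Date with Julie", "activities": ["dinner", "movie"], "spend": 70},
    {"event": "Date with Julie", "activities": ["dinner"], "spend": 55},
    {"event": "Date with Julie", "activities": ["dinner", "movie"], "spend": 85},
]
print(suggest("Date with Julie", "downtown", history))
```

The real version would weigh thousands of signals, but the shape of the logic (context in, unasked-for suggestions out) is the point.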

Can Googling become implicit? “We’ve often said the perfect search engine will provide you with exactly what you need to know at exactly the right moment, potentially without you having to ask for it,” says Google lead experience designer Jon Wiley, one of the people running the research experiment.

As our devices know more about us, the act of Googling may move from a conscious act to a subliminal suggestion. The advantage, for Google and for us, is that it can provide us with information we never thought to ask for. In the ideal state envisioned by Google, the engine can read the cues of our current state and scour its index of information to provide relevant options. Let’s say we just bought a bookcase from Ikea. Without being asked, Google could download the assembly manual and pull relevant posts from user support forums.

It ingrains the Google habit. Google is currently in the enviable position of having become a habit. We don’t think to use Google, we just do. Of course, habits can be broken. A habit is a subconscious script that plays out in a familiar environment, delivering an expected outcome without conscious intervention. To break a habit, you usually disrupt the environment, stopping the script before it has a chance to play out.

The environment of search is currently changing dramatically. This raises the possibility that the Google habit could be broken. If our habits suddenly find themselves in unfamiliar territory, the regular scripts are blocked and we’re forced to think our way through the situation.

But if Google can adapt to unfamiliar environments and prompt us with relevant information without us having to give it any thought, the company not only preserves the Google habit but ingrains it even more deeply. Good news for Google, bad news for Bing and other competitors.

It expands Google’s online landscape. Finally, Google’s best opportunity for a sustainable revenue channel is still to monetize search. As long as Google controls our primary engagement point with online information, it has no shortage of monetization opportunities. By moving away from waiting for a query and toward proactively serving information, Google can exponentially expand the number of potential touch points with users. Each of these touch points comes with another advertising opportunity.

All this is potentially ground-breaking, but it’s not new. Microsoft was talking about Implicit Querying a decade ago. It was supposed to be built into Windows Vista. At that time, it was bound to the desktop. But now, in a more mobile world, the implications of implicit searching are potentially massive.

The Balancing of Market Information

First published October 25, 2012 in Mediapost’s Search Insider

In my three previous columns on disintermediation, I made a rather large assumption: that the market will continue to see a balancing of the information available to both buyers and sellers. As this information becomes more available, the need for the “middle” will decrease.

Information Asymmetry Defined

Let’s begin by exploring the concept of information asymmetry, courtesy of George Akerlof, Michael Spence and Joseph Stiglitz.  In markets where access to information is unbalanced, bad things can happen.

If the buyer has more information than the seller, we can get something called adverse selection. Take life and health insurance, for example. Smokers, on average, get sick more often and die younger than non-smokers. If half an insurance company’s policyholders are smokers and half aren’t, but the company is not allowed to know which is which, it has an adverse selection problem. It will lose money on the smokers, so it will raise rates across the board. The problem is that non-smokers, who draw on their insurance less, will balk at the higher rates and may cancel their policies. The “book of business” then becomes even less profitable, driving rates higher still. The solution, which we all know, is simple: ask applicants whether they smoke. The imbalance of information is thus corrected.
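To see how fast that spiral plays out, here’s a toy simulation. All the numbers (claim costs, lapse behavior) are invented for illustration:

```python
# A toy simulation of the adverse-selection spiral. Claim costs and lapse
# behavior are invented for illustration.
smokers, nonsmokers = 500, 500            # policyholders in the pool
cost_smoker, cost_nonsmoker = 1200, 600   # hypothetical expected annual claims

for year in range(5):
    pool_cost = smokers * cost_smoker + nonsmokers * cost_nonsmoker
    premium = pool_cost / (smokers + nonsmokers)   # reprice to break even
    # Non-smokers overpay relative to their own risk, so some cancel each year.
    overpay = premium - cost_nonsmoker
    nonsmokers = int(nonsmokers * max(0.0, 1 - overpay / cost_nonsmoker))
    print(f"year {year}: premium ${premium:.0f}, non-smokers remaining: {nonsmokers}")
```

The premium climbs toward the smoker-only cost as the healthy customers walk away: exactly the death spiral that asking applicants about smoking is designed to prevent.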

If the seller has more information than the buyer, then we have a “market for lemons” (the title of Akerlof’s paper). Here, buyers assume risk in a purchase without knowingly accepting it, because they’re unaware of the problems the seller knows exist. Think about buying a used car without the benefit of an inspection, past maintenance records or any type of independent certification. All you know is what you can see by looking at the car on the lot. The seller, on the other hand, knows the exact mechanical condition of the car. This tends to drive down the prices of all products in the market (even the good ones), because buyers assume quality is suspect. The balancing of information in this case helps eliminate the lemons and has the long-term effect of improving the average quality of all products on the market.
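The lemons dynamic can be sketched just as simply. In this toy model (all car values hypothetical), buyers can’t observe quality, so they offer only the average value of the cars on the lot, and sellers of better-than-average cars withdraw:

```python
# A toy "market for lemons": buyers bid the average value of what's for sale,
# and sellers whose cars are worth more than the offer withdraw. Car values
# are hypothetical.
values = [i * 1000 for i in range(1, 11)]   # cars worth $1,000 .. $10,000
for rnd in range(5):
    offer = sum(values) / len(values)           # buyers bid the expected value
    values = [v for v in values if v <= offer]  # better cars leave the market
    print(f"round {rnd}: buyers offer ${offer:,.0f}, {len(values)} cars remain")
```

Within a few rounds, only the $1,000 lemon is left, which is why independent inspections and vehicle-history reports make everyone better off (except the owners of lemons).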

Getting to Know You…

These two forces — the need for sellers to know more about their buyers, and the need for buyers to know more about what they’re buying — are driving a tremendous amount of information-gathering and dissemination. On the seller’s side, behavioral tracking and customer screening are giving companies an intimate glimpse into our personal lives. On the buyer’s side, access to consumer reviews, third-party evaluations and buyer forums is helping us steer clear of lemons. Both are being facilitated through technology.

But how does disintermediation impact information asymmetry, or vice versa?

Historically, when we didn’t have adequate information, we needed some other safeguard against being taken advantage of. So, failing a rational answer to this particular market dilemma, we found an irrational one: We relied on gut instinct.

Relying on Relationships

If we had to place our trust in someone, it had to be someone we could look in the eye during the transaction. The middle was composed of individuals who acted as the face of the market. Because they lived in the same communities as their customers, went to the same churches, and had kids that went to the same schools, they had to respect their markets. If they didn’t, they’d be run out of town. Often, their loyalties were also in the middle, balanced somewhere between their suppliers and their customers.

In the absence of perfect information, we relied on relationships. Now, as information improves, we still want relationships, because that’s what we’ve come to expect. We want the best of both worlds.

A Decade with the Database of Intentions

First published September 27, 2012 in Mediapost’s Search Insider

It’s been over 10 years since John Battelle first started considering what he called the “Database of Intentions.” It was, and is:

The aggregate results of every search ever entered, every result list ever tendered, and every path taken as a result. It lives in many places, but three or four places in particular hold a massive amount of this data (i.e., MSN, Google, and Yahoo). This information represents, in aggregate form, a place holder for the intentions of humankind – a massive database of desires, needs, wants, and likes that can be discovered, subpoenaed, archived, tracked, and exploited to all sorts of ends. Such a beast has never before existed in the history of culture, but is almost guaranteed to grow exponentially from this day forward. This artifact can tell us extraordinary things about who we are and what we want as a culture. And it has the potential to be abused in equally extraordinary fashion.

When Battelle considered the implications, it overwhelmed him. “Once I grokked this idea (late 2001/early 2002), my head began to hurt.” Yet, for all its promise, marketers have only marginally leveraged the Database of Intentions.

In the intervening time, the possibilities of the Database of Intentions have not diminished. In fact, as Battelle himself has written, they have grown exponentially:

My mistake in 2003 was to assume that the entire Database of Intentions was created through our interactions with traditional web search. I no longer believe this to be true. In the past five or so years, we’ve seen “eruptions” of entirely new fields, each of which, I believe, represent equally powerful signals – oxygen flows around which massive ecosystems are already developing. In fact, the interplay of all of these signals (plus future ones) represents no less than the sum of our economic and cultural potential.

I share Battelle’s predilection for “Holy Sh*t” moments, so a post by MediaPost’s Laurie Sullivan this Tuesday got me thinking again about Battelle’s “DBoI.” A recent study by Google and EA showed that search data can predict video game sales with 84% accuracy. But the data used in the prediction only scratches the surface of what’s possible. Adam Stewart from Google hints at what might be possible: “Aside from searches, Google plans to build in game quality, TV investment, online display investment, and social buzz to create a multivariate model for future analysis.”
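To show how unexotic the mechanics are, here’s a minimal sketch of a model in that spirit: ordinary least squares predicting launch-week sales from a handful of pre-launch signals. The feature names and every number below are invented; a real model would be fit on actual search and media data.

```python
# A minimal multivariate sales model. All features and figures are invented
# for illustration; a real model would be trained on actual signal data.
import numpy as np

# Columns: search-volume index, TV spend ($M), display spend ($M), social-buzz index
X = np.array([
    [82, 1.2, 0.4, 65],
    [45, 0.6, 0.2, 30],
    [96, 2.0, 0.9, 88],
    [60, 0.9, 0.3, 41],
    [73, 1.1, 0.5, 57],
])
y = np.array([410, 150, 620, 240, 330])  # launch-week sales (thousands of units)

# Ordinary least squares with an intercept term.
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

upcoming = np.array([1, 88, 1.5, 0.6, 70])  # pre-launch signals for a new title
print(f"predicted launch-week sales: {upcoming @ coef:.0f}k units")
```

The point isn’t the regression itself, which is decades-old statistics; it’s that the Database of Intentions supplies the input signals at a scale and freshness no survey ever could.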

This is very doable stuff. We already have everything we need to create predictive models that match (and probably far exceed) the degree of accuracy already demonstrated. The data is just sitting there, waiting to be interpreted. The implications for marketing are staggering, but to Battelle’s point, let’s not be too quick to corral this simply for the use of marketers. The DBoI has implications that reach into every aspect of our society and lives. This is big — really big! If that sounds unduly ominous to you, let me give you a few reasons why you should be more worried than you are.

Typically, if we were to predict patterns in human behavior, there would be two sources of signals. One comes from an understanding of how humans act. As we speak, this is being attacked on multiple fronts. Neuroscience, behavioral economics, evolutionary psychology and a number of other disciplines are rapidly converging on a vastly improved understanding of what makes us tick. From this base understanding, we can then derive hypotheses of predicted behaviors in any number of circumstances.

This brings us to the other source of behavior signals. If we have a hypothesis, we need some way to scientifically test it. Large-scale collections of human behavioral data allow us to search for patterns and identify underlying causes, which can then serve as predictive signals for future scenarios. The Database of Intentions gives us a massive source of behavior signals that capture every dimension of societal activity. We can test our hypotheses quickly and accurately against the tableau of all online activity, looking for the underlying influences that drive behaviors.

At the intersection of these two is something of tremendous import. We can start predicting human behavior on a massive scale, with unprecedented accuracy. With each prediction, the feedback loop between qualitative prediction and quantitative verification becomes faster and more efficient. Throw a little processing power at it and we suddenly have an artificially intelligent, self-improving predictive model that will tell us, with startling accuracy, what we’re likely to do in the future.

This ain’t just about selling video games, people. This is a much, much, much bigger deal.

A Look at the Future through Google Glasses?

First published June 7, 2012 in Mediapost’s Search Insider

“A wealth of information creates a poverty of attention.” — Herbert Simon

Last week, I explored the dark recesses of the hyper-secret Google X project.  Two X Projects in particular seem poised to change our world in very fundamental ways: Google’s Project Glass and the “Web of Things.”

Let’s start with Project Glass. In a video entitled “One Day…,” the future seen through the rose-colored hue of Google Glasses seems utopian, to say the least. In the video, we step into the starring role, strolling through our lives while our connected Google Glasses feed us a steady stream of information and communication — a real-time connection between our physical world and the virtual one.

In theory, this seems amazing. Who wouldn’t want to have the world’s sum total of information available instantly, just a flick of the eye away?

Couple this with the “Web of Things,” another project said to be in the Google X portfolio.  In the Web of Things, everything is connected digitally. Wearable technology, smart appliances, instantly findable objects — our world becomes a completely inventoried, categorized and communicative environment.

Information architecture expert Peter Morville explored this in his book “Ambient Findability.” But he cautions that things may not be as rosy as you might think after drinking the Google X Kool-Aid. This excerpt is from a post he wrote on Ambient Findability: “As information becomes increasingly disembodied and pervasive, we run the risk of losing our sense of wonder at the richness of human communication.”

And this brings us back to the Herbert Simon quote — knowing and thinking are not the same thing. Our brains were not built on the assumption that all the information we need is instantly accessible. And, if that does become the case through advances in technology, it’s not at all clear what the impact on our ability to think might be. Nicholas Carr, for one, believes that the Internet may have the long-term effect of actually making us less intelligent. And there’s empirical evidence he might be right.

In his book “Thinking, Fast and Slow,” Nobel laureate Daniel Kahneman says that while we have the ability to make intuitive decisions in milliseconds (Malcolm Gladwell explored this in “Blink”), we humans also have a nasty habit of using these “fast” mental shortcuts too often, relying on gut calls that are often wrong (or, at the very least, biased) when we should be using the more effortful “slow” and rational capabilities that tend to live in the frontal part of our brain. We rely on beliefs, instincts and habits, at the expense of thinking. Call it informational instant gratification.

Kahneman recounts a seminal study in psychology, where four-year-old children were given a choice: they could have one Oreo immediately, or wait 15 minutes (in a room with the offered Oreo in front of them, with no other distractions) and have two Oreos. About half of the children managed to wait the 15 minutes. But it was the follow-up study, where the researchers followed what happened to the children 10 to 15 years later, that yielded the fascinating finding:

“A large gap had opened between those who had resisted temptation and those who had not. The resisters had higher measures of executive control in cognitive tasks, and especially the ability to reallocate their attention effectively. As young adults, they were less likely to take drugs. A significant difference in intellectual aptitude emerged: the children who had shown more self-control as four year olds had substantially higher scores on tests of intelligence.”

If this is true for Oreos, might it also be true for information? If we become a society that expects to have all things at our fingertips, will we lose the “executive control” required to actually think about things? Wouldn’t it be ironic if Google, in fulfilling its mission to “organize the world’s information,” inadvertently transgressed against its other maxim, “don’t be evil,” by making us all attention-deficit, intellectually diminished, morally bankrupt dough heads?

The “Field of Dreams” Dilemma

First published May 3, 2012 in Mediapost’s Search Insider

There’s a chicken-and-egg paradox in mobile marketing. Many mobile sites sit moldering in the online wilderness, attracting few to no visitors. The same could be said for many elaborate online customer portals, social media outposts or online communities. Somebody went to the trouble to build them, but no one came. Why?

Well, it could be because no one thinks to go to the trouble to look for them, just as no one expects to find a ball diamond in the middle of an Iowa cornfield. It wasn’t until the ghosts of eight Chicago White Sox players, banned for life from playing the game they loved, started playing on the “Field of Dreams” that anyone bothered to drive to Ray Kinsella’s farm.  There was suddenly a reason to go.

The problem with many out-of-the-way online destinations is that there is no good reason to go. Because of this, we make two assumptions:

– If there is no good reason for a destination to exist, then the destination probably doesn’t exist. Or,

– If it does exist, it will be a waste of time and energy to visit.

If we jump to either of these two conclusions, we don’t bother looking for the destination. We won’t make the investment required to explore and evaluate. You see, there is a built-in mechanism that makes a “Build it and they will come” strategy a risky bet.

This built-in mechanism comes from behavioral ecology and is called the “marginal value theorem.” It was first identified by Eric Charnov in 1976 and has since been borrowed to explain behaviors in online information foraging by Peter Pirolli, amongst others. The idea behind it is simple: We will only invest the time and effort to find a new “patch” of online information if we think it’s better than “patches” we already know exist and are easy to navigate to.  In other words, we’re pretty lazy and won’t make any unnecessary trips.

This cost/benefit calculation is done largely at a subconscious level and will dictate our online behaviors. It’s not that we make a conscious decision not to look for new mobile sites or social destinations. But unbeknownst to us, our brain is already passing value judgments that will tend to keep us going down well-worn paths. So, if we’re looking for information or functionality that we’d be unlikely to find in a mobile site or app, but we know of a website that has just what we’re looking for, and time is not urgent, we’ll wait until we’re in front of our regular computer to do the research. We automatically disqualify the mobile opportunity because our “marginal value” threshold has not been met.
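For the curious, Charnov’s rule can be stated in a few lines of code: stay in a patch while its instantaneous rate of gain beats the average rate you could earn by moving on. The gain curve and timings below are hypothetical, chosen just to show the shape of the calculation.

```python
# Charnov's marginal value theorem, sketched for information foraging.
# The gain curve and travel time are hypothetical.
import math

def gain(t):
    """Cumulative information gained after t seconds in a patch (diminishing returns)."""
    return 40 * (1 - math.exp(-t / 30))

def rate_now(t, dt=0.01):
    """Instantaneous rate of gain at time t."""
    return (gain(t + dt) - gain(t)) / dt

travel_time = 10  # seconds to find and orient in the next patch

# Optimal leaving time: stay while the instantaneous rate still beats the
# overall rate, gain(t) / (t + travel_time), achievable by moving on.
t = 0.0
while rate_now(t) > gain(t) / (t + travel_time):
    t += 0.1
print(f"leave this patch after roughly {t:.0f} seconds")
```

Shrink the travel time (a faster connection, a better-known alternative) and the optimal stay shrinks with it, which is exactly why we abandon mediocre patches so quickly.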

The same is true for social sites. If we believe that there is a compelling reason to seek out a Facebook page (promotional offers, information not available elsewhere) then we’ll go to the trouble to track it down. Otherwise, we’ll stick to destinations we know.

I believe the marginal value theorem plays an important role in defining the scope of our online worlds. We only explore new territory when we feel our needs won’t be met by destinations we already know and are comfortable with.  And if we rule out entire categories of content or functionality as being unlikely to adapt well to a mobile or social environment (B2B research in complex sales scenarios being one example) then we won’t go to the trouble to look for them.

I should finish off by saying that this is a moving target. Once there is enough critical mass in new online territory to reset visitor expectations, you’ve increased the “richness” of the patch to the point where the “marginal value” conditions are met and the brain decides it’s worth a small investment of time and energy.

In other words, if Shoeless Joe Jackson, Chick Gandil, Eddie Cicotte, Lefty Williams, Happy Felsch, Swede Risberg, Buck Weaver and Fred McMullin all start playing baseball in a cornfield, then it’s probably worth hopping on the tractor and headin’ over to the Kinsella place!

Search and the Age of “Usefulness”

First published April 19, 2012 in Mediapost’s Search Insider

There has been a lot of digital ink spilled over the recent changes to Google’s algorithm and what they mean for the SEO industry. This is not the first time the death knell has been sounded for SEO. The industry seems to have more lives than your average barnyard cat. But there’s no doubt that Google’s recent changes throw a rather large wrench into the industry as a whole. In my view, that’s a good thing.

First of all, from the perspective of the user, Google’s changes mark an evolution of search beyond a tool for finding information to one that helps us do the things we want to do. It’s moving from using relevance as the sole measure of success to incorporating usefulness.

The algorithm is changing to keep pace with the changes in the Web as a whole. No longer is it just the world’s biggest repository of text-based information; it’s now a living, interactive, functional network of apps, data and information, extending our capabilities through a variety of connected devices.

Google had to introduce these back-end changes. Not to do so would have guaranteed the company would have soon become irrelevant in the online world.

As Google succeeds in consistently interpreting more and more signals of user intent, it can become more confident in presenting a differentiated user experience. It can serve a different type of results set to a query that’s obviously initiated by someone looking for information than it does to the user who’s looking to do something online.

We’ve been talking about the death of the monolithic set of search results for years now. In truth, it never died; it just faded away, pixel by pixel. The change has been gradual, but for the first time in several years of observing search, I can truthfully say that my search experience (whether on Google, Bing or the other competitors) looks significantly different today than it did three years ago.

As search changes, so do the expectations of users. And that affects the “use case” of search. In its previous incarnation, we accepted that search was one of a number of necessary intermediate steps between our intent and our ultimate action. If we wanted to do something, we accepted the fact that we would search for information, find the information, evaluate the information and then, eventually, take the information and do something with it. The limitations of the Web forced us to take several steps to get us where we wanted to go.

But now, as we can do more of what we want to online, the steps are being eliminated. Information and functionality are often seamlessly integrated in a single destination. So we have less patience with seemingly superfluous steps between us and our destination. That includes search.

Soon, we will no longer be content with considering the search results page as a sort of index to online content. We will want the functionality we know exists served to us via the shortest possible path. We see this beginning as answers to common information requests are pushed to the top of the search results page.

What this does, in terms of user experience, is make the transition from search page to destination more critical than ever. As long as search was a reference index, the user expected to bounce back and forth between potential destinations, deciding which was the best match. But as search gets better at unearthing useful destinations, our “post-click” expectations will rise accordingly.  Whatever lies on the other side of that search click better be good. The changes in Google’s algorithm are the first step (of several yet to come) to ensure that it is.

What this does for SEO specialists is push them toward considering a much bigger picture than they previously had to worry about. They have to think in terms of a search user’s unique intent and expectations. They have to understand the importance of the transition from a search page to a landing page, and the functionality that page has to offer. And, most of all, they have to counsel their clients on the increasing importance of “usefulness” — and how potential customers will use online channels to seek and connect to that usefulness. If the SEO community can transition to that role, there will always be a need for them.

The SEO industry and the Google search quality team have been playing a game of cat and mouse for several years now. It’s been more “hacking” than “marketing” as SEO practitioners prod for loopholes in the Google algorithm. All too often, a top ranking was the end goal, with no thought to what that actually meant for true connections with prospects.

In my mind, if that changes, it’s perhaps the best thing to ever happen in the SEO business.

As We May Remember

First published January 12, 2012 in Mediapost’s Search Insider

In his famous Atlantic Monthly essay “As We May Think,” published in July 1945, Vannevar Bush forecast a mechanized extension to our memory that he called a “memex”:

Consider a future device for individual use, which is a sort of mechanized private file and library. It needs a name, and to coin one at random, “memex” will do. A memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory.

Last week, I asked you to ponder what our memories might become now that Google puts vast heaps of information just one click away. And ponder you did:

I have to ask, WHY do you state, “This throws a massive technological wrench into the machinery of our own memories,” inferring something negative??? Might this be a totally LIBERATING situation? – Rick Short, Indium Corporation

Perhaps, much like using dictionaries in grade school helped us to learn and remember new information, Google is doing the same? Each time we “google” and learn something new aren’t we actually adding to our knowledge base in some way? – Lester Bryant III

Finally, I ran across this: Columbia University’s Betsy Sparrow, along with Jenny Liu and our old friend Daniel Wegner (of transactive memory), actually did research on this very topic this past year. It appears from the study that our brains are already adapting to having Internet search as a memory crutch. Participants were less likely to remember information they looked up online when they knew they could access it again at any time. Also, if they looked up information that they knew they could remember, they were less likely to remember where they found it. But if the information was deemed difficult to remember, the participants were more likely to remember where they found it, so they could navigate there again.

The beautiful thing about our capacity to remember things is that it’s highly elastic. It’s not restricted to one type of information. It will naturally adapt to new challenges and requirements. As many rightly commented on last week’s column, the advent of Google may introduce an entirely new application of memory — one that unleashes our capabilities rather than restricts them. Let me give you an example.

If I had written last week’s column in 1987, before the age of Internet search, I would have been very hesitant to use the references I did: the transactive memory hypothesis of Daniel Wegner, and the scene from “Annie Hall.” That’s because I couldn’t remember them that well. I knew (or thought I knew) what the general gist was, but I had to search them out to reacquaint myself with the specific details of each. I used Google in both cases, but I was already pretty sure that Wikipedia would have a good overview of transactive memory and that YouTube would have the clip in question. Sure enough, both those destinations topped the results that Google brought back. So, my search for transactive memory utilized my own transactive memory. The same was true, by the way, for my reference to Vannevar Bush at the opening of this column.

By knowing what type of information I was likely to find, and where I was likely to find it, I could check the references to ensure they were relevant and summarize what I quickly researched in order to make my point. All I had to do was remember high-level summations of concepts, rather than the level of detail required to use them in a meaningful manner.

One of my favorite concepts is the idea of consilience – literally, the “jumping together” of knowledge. I believe one of the greatest gifts of the digitization of information is the driving of consilience. We can now “graze” across multiple disciplines without having to dive too deep in any one, and pull together something useful — and occasionally amazing. Deep dives are now possible “on demand.” Might our memories adapt to become consilience orchestrators, able to quickly sift through the sum of our experience and gather together relevant scraps of memory to form the framework of new thoughts and approaches?

I hope so, because I find this potential quite amazing.

Is Google Replacing Memory?

First published on January 5, 2012 in Mediapost’s Search Insider

“How old is Tony Bennett anyway?”

We were sitting in a condo on a ski hill with friends, counting down to the new year, when the ageless Mr. Bennett appeared on TV. One of us wondered aloud just how many new years he had personally ushered in.

In days gone by, the question would have just hung there. It would probably have filled up a few minutes of conversation. If someone felt strongly about the topic, it might even have started an argument. But, at the end of it all, there would be no definitive answer — just opinions.

This was the way of the world. We were restricted to the knowledge we could each jam into our noggins. And if our opinion conflicted with another’s, all we could do was argue.

In “Annie Hall,” Woody Allen set up the scenario perfectly. He and Diane Keaton are in a movie line. Behind them, an intellectual blowhard is in mid-stream pontification on everything from Fellini’s movie-making to the media theories of Marshall McLuhan. Finally, Allen can take it no more and asks the camera, “What do you do with a guy like this?” The “guy” takes exception and explains to Allen that he teaches a course on McLuhan at Columbia. But Allen has the last laugh — literally. He pulls the real Marshall McLuhan out from behind an in-lobby display, and McLuhan proceeds to intellectually eviscerate the Columbia professor.

“If only life was actually like this,” Allen sighs to the camera.

Well, now, some 35 years later, it may be. While we may not have Marshall McLuhan in our back pocket, we do have Google. And for many questions, Google is the final arbiter. Opinions quickly give way to facts (or, at least, information presented as fact online). No longer do we have to wonder how old Tony Bennett really is. Now, we can quickly check the answer.

If you stop to think about this, it has massive implications.

In 1985, Daniel Wegner proposed something along these lines when he introduced the hypothetical concept of transactive memory. An extension of the “group mind,” transactive memory posits a type of meta-memory, where our own capacity to remember things is enhanced in a group by knowing who in that group knows more than we do about any given topic.

In its simplest form, transactive memory is my knowing that my wife tends to remember birthdays and anniversaries — but I remember when to pay our utility bills. It’s not that I can’t remember birthdays and my wife can’t remember to pay bills, it’s just that we don’t have to go to the extra effort if we know our partner has it covered.

If Wegner’s hypothesis is correct (and it certainly passes my own smell test) then transactive memory has been around for a long time. In fact, many believe that the acquisition of language, which allowed for the development of transactive memory and other aids to survival in our ancestral tribes, was probably responsible for the “Great Leap Forward” in our own evolution.

But with ubiquitous access to online knowledge, transactive memory takes on a whole new spin. Now, not only don’t we have to remember as much as we used to, we don’t even have to remember who else might have the answer. For much of what we need to know, it’s as simple as searching for it on our smartphone.  Our search engine of choice does the heavy lifting for us.

This throws a massive technological wrench into the machinery of our own memories. Much of what memory was originally intended for may no longer be required. And this raises the question, “If we no longer have to remember stuff we can just look up online, what will we use our memory for?”

Something to ponder at the beginning of a new year.

Oh, and in case you’re wondering, Anthony Dominick Benedetto was born Aug. 3, 1926, making him 85.