The Psychology of Usefulness: How Our Brains Judge What is Useful

Did you know that “task” and “tax” have the same linguistic roots? They both come from the Latin “taxare” – meaning to appraise. This could explain the lack of enthusiasm we have for both.

Tasks are what I referred to in the last post as an exotelic activity – something we have to do to reach an objective that carries no inherent reward. We do them because we have to do them, not because we want to do them.

When we undertake a task, we want to find the most efficient way to get it done. Usefulness becomes a key criterion. And when we judge usefulness, there are some time-tested procedures the brain uses.

Stored Procedures and Habits

The first question our brain asks when undertaking a task is – have we done this before? Let’s first deal with what happens if the answer is yes:

If we’ve done something before, our brains – very quickly and at a subconscious level – ask a number of qualifying questions:

  • How often have we done this?
  • Does the context in which the task plays out remain fairly consistent (i.e., are we dealing with a stable environment)?
  • How successful have we been in carrying out this task in the past?

If we’ve done a task a number of times in a stable environment with successful outcomes, it’s probably become a habit. The habit chunk is retrieved from the basal ganglia and plays out without much in the way of rational mediation. Our brain handles the task on autopilot.

If we have less familiarity with the task, or there’s less stability in the environment, but we have done it before, we probably have stored procedures – set procedural alternatives. These require more in the way of conscious guidance and often have decision points where we have to determine what to do next, based on the results of the previous action.

If we’re entering new territory and can’t draw on past experience, our brains have to get ready to go to work. This is the route least preferred by our brain. It only goes here when there’s no alternative.

Judging Expected Utility and Perceived Risk

If a task requires us to go into unfamiliar territory, there are new routines that the brain must perform. Basically, the brain must place a mental bet on the best path to take, balancing a prediction of a satisfactory outcome against the resources required to complete the task. Psychologists call this “Expected Utility.”

Expected Utility is the brain’s attempt to forecast scenarios that require the balancing of risks and rewards where the outcomes are not known.  The amount of processing invested by the brain is usually tied to the size of the potential risk and reward. Low risk/reward scenarios require less rationalization. The brain drives this balance by using either positive or negative emotional valences, interpreted by us as either anticipation or anxiety. Our emotional balance correlates with the degree of risk or reward.
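To make that concrete, here is a minimal sketch of an expected-utility comparison between two ways of tackling the same task. All of the numbers – the probability of a satisfactory outcome, the subjective value of that outcome, and the effort each path demands – are invented for illustration; they stand in for quantities the brain presumably estimates, not anything we can measure this cleanly.

```python
# A toy expected-utility comparison between two ways of tackling a task.
# p_success = estimated probability of a satisfactory outcome,
# reward    = subjective value of that outcome,
# effort    = the time and energy the path would consume.
# All values are illustrative assumptions.

def expected_utility(p_success, reward, effort):
    """Expected gain minus the resources we expect to spend."""
    return p_success * reward - effort

familiar_route = expected_utility(p_success=0.9, reward=10, effort=4)
novel_route = expected_utility(p_success=0.6, reward=15, effort=7)

best = "familiar route" if familiar_route >= novel_route else "novel route"
print(f"familiar: {familiar_route:.1f}, novel: {novel_route:.1f} -> take the {best}")
```

In low-stakes scenarios the brain presumably runs nothing nearly this explicit; the emotional shorthand described above does the arithmetic for us.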

Expected utility is more commonly applied in financial decision-making and game theory. In the case of conducting a task, there is usually no monetary element to risk and reward. What we’re risking is our own resources – time and effort. Because managing these resources is a long-established, evolved capability, it’s reasonable to assume that we have developed subconscious routines to determine how much effort to expend in return for a possible gain. These cognitive evaluations and calculations likely happen at a largely subconscious level – or at least at a more subconscious level than the processing involved in evaluating financial gambles or other decisions with higher degrees of risk and reward. In that context, it might make sense to look at how we approach another required task – finding food.

Optimal Foraging and Marginal Value

Where we balance gain against the expenditure of time and effort, the brain has some highly evolved routines that have developed over our history. The oldest of these would be how we forage for food. But we also have a knack for borrowing strategies developed for other purposes and using them in new situations.

Pirolli and Card (1999) found, for instance, that we use our food foraging strategies to navigate digital information. Like food, information online tends to be “patchy” and of varying value to us. Often, just like looking for a food source, we have to forage for information by judging the quality of hyperlinks that may take us to those information sources or “patches.” Pirolli and Card called these clues to the quality of information that may lie at the other end of a link “information scent.”
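Pirolli and Card model information scent with spreading activation; a much cruder stand-in, just to show the flavour of the idea, is to score each link by how much its cue text overlaps with the words of our information goal. The goal and link texts below are invented.

```python
# A crude stand-in for "information scent": score each hyperlink by the
# overlap between its cue text and our information goal. Pirolli and Card's
# actual model uses spreading activation; this toy proxy is only illustrative.

def scent(goal, link_text):
    goal_words = set(goal.lower().split())
    link_words = set(link_text.lower().split())
    return len(goal_words & link_words) / len(goal_words)

goal = "side effects of caffeine on sleep"
links = [
    "Ten ways caffeine affects your sleep",
    "The history of coffee cultivation",
    "Sleep hygiene tips for shift workers",
]

# Follow the strongest scent first, just as a forager heads for the richest patch.
for text in sorted(links, key=lambda t: scent(goal, t), reverse=True):
    print(f"{scent(goal, text):.2f}  {text}")
```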

Tied to this foraging strategy is the concept of Marginal Value. This was first proposed by Eric Charnov in 1976 as an evolved strategy for determining how much time to spend in a food patch before deciding to move on. In a situation with diminishing returns (i.e., depleted food supplies), the brain must balance effort expended against return. If you happen on a berry bush in the wild, with a reasonable certainty that there are other bushes nearby (perhaps you can see them just a few steps away), you have to mentally solve the following equation: how many berries can be gathered with a reasonable expenditure of effort, versus how much effort it would take to walk to the next bush and how many berries would be available there?
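Charnov’s result says, roughly, that the forager should leave a patch once its instantaneous rate of return drops to the average rate achievable across the whole environment, travel time included. Here’s a minimal numerical sketch of that trade-off, assuming a diminishing-returns gain curve for the bush and a fixed walk to the next one; the curve and the numbers are invented for illustration.

```python
import math

# Marginal value theorem, sketched numerically. gain(t) is the cumulative
# berry count after t minutes in a patch with diminishing returns; travel is
# the walk to the next bush. The best leaving time maximizes berries per
# minute once the walking is included. All numbers are invented.

def gain(t, max_berries=100, depletion_rate=0.3):
    return max_berries * (1 - math.exp(-depletion_rate * t))

def overall_rate(t, travel):
    return gain(t) / (t + travel)

travel = 5.0  # minutes to the next bush
candidate_times = [t / 10 for t in range(1, 300)]
best_t = max(candidate_times, key=lambda t: overall_rate(t, travel))

print(f"leave the bush after ~{best_t:.1f} minutes, "
      f"averaging {overall_rate(best_t, travel):.1f} berries per minute")
```

The closer the next bush, the smaller the travel term gets and the earlier it pays to move on – which matches the intuition above.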

This is somewhat analogous to information foraging, with one key difference: information isn’t depleted as you consume it, so the rule of diminishing returns is less relevant. But if, as I suspect, we’ve borrowed these subconscious strategies for judging usefulness – both in terms of information and functionality – in online environments, our brains may not know or care about the subtle differences between the two settings.

The reason why we may not be that rational in the application of these strategies in online encounters is that they play out below the threshold of consciousness. We are not constantly and consciously adjusting our marginal value algorithm or quantitatively assessing the value of an information patch. No, our brains use a quicker and more heuristic method to mediate our output of effort – emotions. Frustration and anxiety tell us it’s time to move on to the next site or application. Feelings of reward and satisfaction indicate we should stay right where we are. The remarkable thing is that, as quick and dirty as these emotional guidelines are, if you went to the trouble of rationally quantifying the potential of all possible alternatives – using a Bayesian approach, for instance – you’d probably find you ended up in pretty much the same place. These strategies, simmering below the surface of our consciousness, are pretty damn accurate!

So, to sum up this post, when judging the most useful way to get a task done, we have an evaluation cascade that happens very quickly in our brain:

  • If a very familiar task needs to be done in a stable environment, our habits will take over and it will be executed with little or no rational thought.
  • If the task is fairly familiar but requires some conscious guidance, we’ll retrieve a stored procedure and look for successful feedback as we work through it.
  • If a task is relatively new to us, we’ll forage through alternatives for the best way to do it, using evolved biological strategies to help balance risk (in terms of expended effort) against reward.
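As a rough illustration, that cascade can be written out as simple branching logic. The thresholds and the fields on the task are stand-ins I’ve invented; the point is only the ordering of the three routes.

```python
# The evaluation cascade as simple branching logic. Thresholds and fields
# are illustrative stand-ins, not measured quantities.

from dataclasses import dataclass

@dataclass
class Task:
    times_done: int           # how often we've done this before
    environment_stable: bool  # does the context stay fairly consistent?
    past_success_rate: float  # how well it has gone in the past (0..1)

def choose_route(task: Task) -> str:
    if task.times_done > 20 and task.environment_stable and task.past_success_rate > 0.8:
        return "habit: retrieve the chunk from the basal ganglia and run on autopilot"
    if task.times_done > 0:
        return "stored procedure: follow the script, checking feedback at each decision point"
    return "forage: weigh expected utility, balancing effort against possible reward"

print(choose_route(Task(times_done=50, environment_stable=True, past_success_rate=0.95)))
print(choose_route(Task(times_done=3, environment_stable=False, past_success_rate=0.6)))
print(choose_route(Task(times_done=0, environment_stable=False, past_success_rate=0.0)))
```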

Now, to return to our original question, how does this evaluation cascade impact long and short-term user loyalty? I’ll return to this question in my next post.

Google Holds the Right Cards for a Horizontal Market

First published January 9, 2014 in Mediapost’s Search Insider

Functionality builds up, then across. That was the principle of emerging markets that I talked about in last week’s column. Up – then across – breaking down siloes into a more open, competitive and transparent market. I’ll come back here in a moment.

I also talked about how Google+ might be defining a new way of thinking about social networking, one free of dependence on destinations. It could create a social lens through which all our online activity passes, adding functionality and enriching information.

Finally, this week, I read that Google is pushing hard to extend Android as the default operating system in the Open Automotive Alliance – turning cars into really big mobile devices. This builds on Android’s dominance in the smartphone market (with an 82% market share).

See a theme here?

For years, I’ve been talking about the day when search transitions from being a destination to a utility, powering apps which provide very specific functionality that far outstrips anything you could do on a “one size fits all” search portal. This was a good news/bad news scenario for Google, who was the obvious choice to provide this search grid. But, in doing so, they lose their sole right to monetize search traffic, a serious challenge to their primary income source. However, if you piggyback that search functionality onto the de facto operating system that powers all those apps, and then add a highly functional social graph, you have all the makings of a foundation that will support the “horizontalization” of the mobile connected market. Put this in place, and revenue opportunities will begin falling into your lap.

The writing is plainly on the wall here. The future is all about mobile connections. It is the foundation of the Web of Things, wearable technology, mobile commerce – anything and everything we see coming down the pipe.  The stakes are massive. And, as markets turn horizontal in the inevitable maturation phase to come, Google seems to be well on their way to creating the required foundations for that market.

Let’s spend a little time looking at how powerful this position might be for Google. Microsoft is still coasting on their success in creating a foundation for the desktop, 30 years later.  The fact that they still exist at all is testament to the power of Windows. But the desktop expansion that happened was reliant on just one device – the PC. And, the adoption curve for the PC took two decades to materialize, due to two things: the prerequisite of a fairly hefty investment in hardware and a relatively steep learning curve. The mobile adoption curve, already the fastest in history, has no such hurdles to clear. Relative entry price points are a fraction of what was required for PCs. Also, the learning curve is minimal. Mobile connectivity will leave the adoption curve of PCs in the dust.

In addition, an explosion of connected devices will propel the spread of mobile connectivity. This is not just about smart phones. Two of the biggest disruptive waves in the next 10 years will be wearable technologies and the Web of Things. Both of these will rely on the same foundations, an open and standardized operating system and the ability to access and share data. At the user interface level, the enhancements of powerful search technologies and social-graph enabled filters will significantly improve the functionality of these devices as they interface with the “cloud.”

In the hand that will have to inevitably be played, it seems that Google is currently holding all the right cards.

Our Brain on Books

Here’s another neuroscanning study out of Emory University showing the power of a story.

Lead researcher Gregory Berns and his team wanted to “understand how stories get into your brain, and what they do to it.” Their findings seem to indicate that stories – in this case a historical fiction novel about Pompeii – caused a number of changes in the participants’ brains, at least in the short term. Over time, some of these changes decayed, but more research is required to determine how long-lasting they are.

One would expect reading to alter related parts of the brain, and this was true in the Emory study. The left temporal cortex, a section of the brain that handles language reception and interpretation, showed signs of heightened connectivity for a period of time after reading the novel. This is almost like the residual effect of exercise on a muscle, which responds favorably to usage.

What was interesting, however, was that the team also saw increased connectivity in the areas of the brain that control representations of sensation for the body. This relates to Antonio Damasio’s “Embodied Semantics” theory, in which the reading of metaphors, especially those relating specifically to tactile images, activates the same parts of the brain that control the corresponding physical activity. The Emory study (and Damasio’s work) seems to show that if you read a novel that depicts physical activity, such as running through the streets of Pompeii as Vesuvius erupts, your brain is firing the same neurons as it would if you were actually doing it!

There are a number of interesting aspects to consider here, but what struck me is the multi-prong impact a story has on us. Let’s run through them:

Narratives have been shown to be tremendously influential frameworks for us to learn and update our sense of the world, including our own belief networks. Books have been a tremendously effective agent for meme transference and propagation. The structure of a story allows us to grasp concepts quickly, but also reinforces those concepts because it engages our brain in a way that a simple recital of facts could not. We relate to protagonists and see the world through their eyes. All our socially tuned, empathetic abilities kick into action when we read a story, helping to embed new information more fully. Reading a story helps shape our world view.

Reading exercises the language centers of our brain, heightening neural connectivity and improving their effectiveness. Neurologists call this “shadow activity” – a concept similar to muscle memory.

Reading about physical activity fires the same neurons that we would use to do the actual activity. So, if you read an action thriller, even though you’re lying flat on a sofa, your brain thinks you’re the one racing a motorcycle through the streets of Istanbul and battling your arch nemesis on the rooftops of Rome. While it might not do much to improve muscle tone, it does begin to create neural pathways. It’s the same concept of visualization used by Olympic athletes.

For Future Consideration

As we learn more about the underlying neural activity of story reading, I wonder how we can use this to benefit ourselves. The biggest question I have is: if a story in written form has this capacity to impact us at all the aforementioned levels, what would more sense-engaged media like television or video games do? If reading about a physical activity tricks the brain into firing the corresponding sensory-controlling neurons, what would happen if we were simulating that activity on a motion-controlled gaming system like Microsoft’s Xbox? My guess would be that the sensory-motor connections would obviously be much more active (because we’re physically active). Unfortunately, research in the area of embodied semantics is still at an early stage, so many of these questions have yet to be answered.

However, if our stories are conveyed through a more engaging sensory experience, with full visuals and sound, do we lose some opportunity for abstract analysis? The parts of our brain we use to read depend on relatively slow processing loops. I believe much of the power of reading lies in the requirements it places on our imagination to fill in the sensory blanks. When we read about a scene in Pompeii we have to create the visuals, the soundtrack and the tactile responses. In all this required rendering, does it more fully engage our sense-making capabilities, giving us more time to interpret and absorb?

What Does Being “Online” Mean?

First published October 24, 2013 in Mediapost’s Search Insider

If readers’ responses to my few columns about Google’s Glass can be considered a representative sample (which, for many reasons, they can’t, but let’s put that aside for the moment), it appears we’re circling the concept warily. There’s good reason for this. Privacy concerns aside, we’re entering virgin territory here, territory that may shift what it means to be online.

Up until now, the concept of online had a lot in common with our understanding of physical travel and acquisition. As Peter Pirolli and Stuart Card discovered, our virtual travels tapped into our evolved strategies for hunting and gathering. The analogy, which holds up in most instances, is that we traveled to a destination. We “went” online, to “go” to a website, where we “got” information. It was, in our minds, much like a virtual shopping trip. Our vehicle just happened to be whatever piece of technology we were using to navigate the virtual landscape of “online.”

As long as we framed our online experiences in this way, we had the comfort of knowing we were somewhat separate from whatever “online” was. Yes, it was morphing faster than we could keep up with, but it was under our control, subject to our intent. We chose when we stepped from our real lives into our virtual ones, and the boundaries between the two were fairly distinct.

There’s a certain peace of mind in this. We don’t mind the idea of online as long as it’s a resource subject to our whims. Ultimately, it’s been our choice whether we “go” online or not, just as it’s our choice to “go” to the grocery store, or the library, or our cousin’s wedding. The sphere of our lives, as defined by our consciousness, and the sphere of “online” only intersected when we decided to open the door.

As I said last week, even the act of “going” online required a number of deliberate steps on our part. We had to choose a connected device, frame our intent and set a navigation path (often through a search engine). Each of these steps reinforced our sense that we were at the wheel in this particular journey. Consider it our security blanket against a technological loss of control.

But, as our technology becomes more intimate, whether it’s Google Glass, wearable devices or implanted chips, being “online” will cease to be about “going” and will become more about “being.”  As our interface with the virtual world becomes less deliberate, the paradigm becomes less about navigating a space that’s under our control and more about being an activated node in a vast network.

Being “online” will mean being “plugged in.” The lines between “online” and “ourselves” will become blurred, perhaps invisible, as technology moves at the speed of unconscious thought. We won’t be rationally choosing destinations, applications or devices. We won’t be keying in commands or queries. We won’t even be clicking on links. All the comforting steps that currently reinforce our sense of movement through a virtual space at our pace and according to our intent will fade away. Just as a light bulb doesn’t “go” to electricity, we won’t “go” online.  We will just be plugged in.

Now, I’m not suggesting a Matrix-like loss of control. I really don’t believe we’ll become feed sacs plugged into the mother of all networks. What I am suggesting is a switch from a rather slow, deliberate interface that operates at the speed of conscious thought to a much faster interface that taps into the speed of our subconscious cognitive processing. The impulses that will control the gateway of information, communication and functionality will still come from us, but they will be operating below the threshold of our conscious awareness. The Internet will be constantly reading our minds and serving up stuff before we even “know” we want it.

That may seem like neurological semantics, but it’s a vital point to consider. Humans have been struggling for centuries with the idea that we may not be as rational as we think we are. Unless you’re a neuroscientist, psychologist or philosopher, you may not have spent a lot of time pondering the nature of consciousness, but whether we actively think about it or not, it does provide a mental underpinning to our concept of who we are.  We need to believe that we’re in constant control of our circumstances.

The newly emerging definition of what it means to be “online” may force us to explore the nature of our control at a level many of us may not be comfortable with.

Losing My Google Glass Virginity

Originally published October 17, 2013 in Mediapost’s Search Insider

Rob, I took your advice.

A few columns back, when I said Google’s Glass might not be ready for mass adoption, fellow Search Insider Rob Garner gave me this advice: “Don’t knock it until you try it.” So, when a fellow presenter at a conference last week brought along his Glass and offered me a chance to try them (Or “it”? Does anyone else find Google’s messing around with plural forms confusing and irritating?), I took him up on it. To say I jumped at it may be overstating the case – let’s just say I enthusiastically ambled to it.

I get Google Glass. I truly do. To be honest, the actual experience of using them came up a little short of my expectations, but not much. It’s impressive technology.

But here’s the problem. I’m a classic early adopter. I always look at what things will be, overlooking the limitations of what currently “is.” I can see the dots of potential extending toward a horizon of unlimited possibility, and don’t sweat the fact that those dots still have to be connected.

On that level, Google Glass is tremendously exciting, for two reasons that I’ll get to in a second. For many technologies, I’ll even connect a few dots myself, willing to trade off pain for gain. That’s what early adopters do. But not everyone is an early adopter. Even given my proclivity for nerdiness, I felt a bit like a jerk standing in a hotel lobby, wearing Glass, staring into space, my hand cupped over the built-in mike, repeating instructions until Glass understood me. I learned there’s a new label for this; for a few minutes I became a “Glasshole.”

Sorry Rob, I still can’t see the mainstream going down this road in the near future.

But there are two massive reasons why I’m still tremendously bullish on wearable technology as a concept. One, it leverages the importance of use case in a way no previous technology has ever done. And two, it has the potential to overcome what I’ll call “rational lag time.”

The importance of use case in technology can be summed up in one word: iPad. There is absolutely no technological reason why tablets, and iPads in particular, should be as popular as they are. There is nothing in an iPad that did not exist in another form before. It’s a big iPhone, without the phone. The magic of an iPad lies in the fact that it’s a brilliant compromise: the functionality of a smartphone in a form factor that makes it just a little bit more user-friendly. And because of that, it introduced a new use case and became the “lounge” device. Unlike a smartphone, where size limits the user experience in some critical ways (primarily in input and output), tablets offer acceptable functionality in a more enjoyable form. And that is why almost 120 million tablets were sold last year, a number projected (by Gartner) to triple by 2016.

The use case of wearable technology still needs to be refined by the market, but the potential to create an addictive user experience is exceptional. Even with Glass’ current quirks, it’s a very cool interface. Use case alone leads me to think the recent $19 billion by 2018 estimate of the size of the wearable technology market is, if anything, a bit on the conservative side.

But it’s the “rational lag time” factor that truly makes wearable technology a game changer. Currently, all our connected technologies can’t keep up with our brains. When we decide to do something, our brains register subconscious activity in about 100 milliseconds, or about one tenth of a second. However, it takes another 500 milliseconds (half a second) before our conscious brain catches up and we become aware of our decision to act. In more complex actions, a further lag happens when we rationalize our decision and think through our possible alternatives. Finally, there’s the action lag, where we have to physically do something to act on our intention. At each stage, the brain can shut down impulses if it feels they require too much effort. Humans are, neurologically speaking, rather lazy (or energy-efficient, depending on how you look at it).

So we have a sequence of potential lags before we act on our intent: Unconscious Stimulation > Conscious Awareness > Rational Deliberation > Possible Action. Our current interactions with technology live at the end of this chain. Even if we have a smartphone in our pocket, it takes several seconds before we’re actively engaging with it. While that might not seem like much, when the brain measures action in split seconds, that’s an eternity of time.

But technology has the potential to work backward along this chain. Let’s move just one step back, to rational deliberation. If we had an “always on” link where we could engage in less than one second, we could utilize technology to help us deliberate. We still have to go through the messiness of framing a request and interpreting results, but it’s a quantum step forward from where we currently are.
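To put rough numbers on that, here’s a toy tally of the lag chain. The 100-millisecond and 500-millisecond figures are the ones cited above; the deliberation, action and “always on” interface numbers are invented purely to show where the time savings would come from.

```python
# A toy tally of the rational lag chain. The 100 ms and 500 ms figures come
# from the column; the deliberation, action and interface numbers are
# invented to illustrate the point, not measured.

lags_ms = {
    "unconscious stimulation": 100,
    "conscious awareness": 500,
    "rational deliberation": 2000,                 # invented
    "action (pull out phone, type query)": 5000,   # invented
}

always_on_interface_ms = 800  # invented: an interface we can engage in under a second

today = sum(lags_ms.values())
with_always_on = (lags_ms["unconscious stimulation"]
                  + lags_ms["conscious awareness"]
                  + always_on_interface_ms)

print(f"today: ~{today / 1000:.1f} s before we're actively engaging the technology")
print(f"with an always-on link at the deliberation step: ~{with_always_on / 1000:.1f} s")
```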

The greatest potential (and the greatest fear) lies one step further back – at conscious awareness. Now we’re moving from wearable technology to implantable technology. Imagine if technology could be activated at the speed of conscious thought, so the unconscious stimulation is detected and parsed and by the time our conscious brain kicks into gear, relevant information and potential actions are already gathered and waiting for us. At this point, any artifice of the interface is gone, and technology has eliminated the rational lag. This is the beginning of Kurzweil’s Singularity: the destination on a path that devices like Google Glass are starting down.

As I said, I like to look at the dots. Someone else can worry about how to connect them.

Bounded Rationality in a World of Information

First published October 11, 2013 in Mediapost’s Search Insider.  

Humans are not good data crunchers. In fact, we pretty much suck at it. There are variations to this rule, of course. We all fall somewhere on a bell curve when it comes to our sheer rational processing power. But, in general, we would all fall to the far left of even an underpowered laptop.


Herbert Simon recognized this more than a half century ago, when he coined the term “bounded rationality.” In a nutshell, we can only process so much information before we become overloaded, at which point we fall back on much more human approaches, typically known as emotion and gut instinct.

Even when we think we’re being rational, logic-driven beings, our decision frameworks are built on the foundations of emotion and intuition. This is not bad. Intuition tends to be a masterful way to synthesize inputs quickly and efficiently, allowing us generally to make remarkably good decisions with a minimum of deliberation. Emotion acts to amplify this process, inserting caution where required and accelerating when necessary. Add to this the finely honed pattern recognition instincts we humans have, and it turns out the cogs of our evolutionary machinery work pretty well, allowing us to adequately function in very demanding, often overwhelming environments.

We’re pretty efficient; we’re just not that rational. There is a limit to how much information we can “crunch.”

So when information explodes around us, it raises a question – if we’re not very good at processing data, what happens when we’re inundated with the stuff? Yes, Google is doing its part by helpfully “organizing the world’s information,” allowing us to narrow down our search to the most relevant sources, but still, how much time are we willing to devote to wading through mounds of data? It’s as if we were all born to be dancers, and now we’re stuck being insurance actuaries. Unlike Heisenberg (sorry, couldn’t resist the “Breaking Bad” reference) – we don’t like it, we’re not very good at it, and it doesn’t make us feel alive.

To make things worse, we feel guilty if we don’t use the data. Now, thanks to the Web, we know it’s there. It used to be much easier to feign ignorance and trust our guts. There are few excuses now. For every decision we have to make, we know that there is information which, carefully analyzed, should lead us to a rational, logical conclusion. Or, we could just throw a dart and then go grab a beer. Life is too short as it is.

When Simon coined the term “bounded rationality,” he knew that the “bounds” were not just the limits on the information available but also the limits of our own cognitive processing power and the limits on our available time. Even if you removed the boundaries on the information available (as is now happening) those limits to cognition and time would remain.
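Simon’s companion idea – “satisficing,” settling for the first option that clears an aspiration level instead of exhaustively ranking every alternative – is easy to sketch. The option values and the threshold below are invented; the contrast in how much gets examined is the point.

```python
import random

# Satisficing versus exhaustive optimizing. Options arrive one at a time with
# a subjective value in [0, 1]; the satisficer stops at the first option that
# clears an aspiration threshold, the optimizer grinds through all of them.
# Values and threshold are invented for illustration.

random.seed(1)
options = [random.random() for _ in range(100)]
aspiration = 0.8

def satisfice(options, aspiration):
    for examined, value in enumerate(options, start=1):
        if value >= aspiration:
            return value, examined
    return max(options), len(options)  # nothing cleared the bar: settle for the best seen

picked, examined = satisfice(options, aspiration)
best = max(options)

print(f"satisficer: value {picked:.2f} after examining {examined} options")
print(f"optimizer:  value {best:.2f} after examining {len(options)} options")
```

The satisficer usually gives up a little value and saves a lot of crunching – which is the trade our bounded brains keep making.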

I suspect we humans are developing the ability to fool ourselves that we are highly rational. For the decisions that count, we do the research, but often we filter that information through a very irrational web of biases, beliefs and emotions. We cherry-pick information that confirms our views, ignore contradictory data and blunder our way to what we believe is an informed decision.

But, even if we are stuck with the same brain and the same limitations, I have to admit that the explosion of available information has moved us all a couple of notches to the right on Simon’s “satisficing” curve. We may not crunch all the information available, but we are crunching more than we used to, simply because it’s available.  I guess this is a good thing, even if we’re a little delusional about our own logical abilities.

Google Glass and the Sixth Dimension of Diffusion

First published August 29, 2013 in Mediapost’s Search Insider

Tech stock analyst and blogger Henry Blodget has declared Google Glass dead on arrival. I’m not going to spend any time talking about whether or not I agree with Mr. Blodget (for the record, I do – Google Glass isn’t an adoptable product as it sits – and I don’t – wearable technology is the next great paradigm shifter), but rather dig into the reason he feels Google Glass is stillborn.

They make you look stupid.

The input for Google Glass is your voice, which means you have to walk around saying things like, “Glass, take a video” or “Glass, what is the temperature?” The fact is, to use Google Glass, you either have to accept the fact that you’ll look like a moron or the biggest jerk in the world. Either way, the vast majority of us aren’t ready to step into that particular spotlight.

Last week, I talked about Everett Rogers’ Diffusion of Innovations and shared five variables that determine the rate of adoption. There is actually an additional factor that Rogers also mentioned: “the status-conferring aspects of innovations emerged as the sixth dimension predicting rate of adoption.”

If you look at Rogers’ diffusion curve, you’ll find the segmentation of the adoption population is as follows: Innovators (2.5% of the population), Early Adopters (13.5%), Early Majority (34%), Late Majority (34%) and Laggards (16%). But there’s another breed that probably hides out somewhere between Innovators and Early Adopters. I call them the PAs (for Pompous Asses). They love gadgets, they love spending way too much for gadgets, and they love being seen in public sporting gadgets that scream “PA.” Previously, they were the ones seen guffawing loudly into Bluetooth headsets while sitting next to you on an airplane, carrying on their conversation long after the flight attendant told them to wrap it up. Today, they’d be the ones wearing Google Glass.

 

This sixth dimension is critical to consider when the balance between the other five is still a little out of whack. Essentially, the first dimension, Relative Advantage, has to overcome the friction of #2, Compatibility, and #3, Complexity (#4, Trialability, and #5, Observability, are more factors of the actual mechanics of diffusion, rather than individual decision criteria). If the advantage of an innovation does not outweigh its complexity or compatibility friction, it will probably die somewhere on the far left slopes of Rogers’ bell curve. The deciding factor will be the Sixth Dimension.
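A back-of-the-napkin version of that argument: relative advantage has to outweigh the combined friction of compatibility and complexity, and when the contest is close, the status effect – the Sixth Dimension – tips the outcome. The scores below are invented; only the structure of the decision matters.

```python
# A back-of-the-napkin sketch of the adoption argument. Relative advantage
# must outweigh compatibility and complexity friction; when the margin is
# thin, the status effect (Rogers' sixth dimension) becomes the tiebreaker.
# All scores are invented for illustration.

def adoption_outlook(relative_advantage, compatibility_friction,
                     complexity_friction, status_effect):
    margin = relative_advantage - (compatibility_friction + complexity_friction)
    if abs(margin) < 1.0:        # too close to call on the first three dimensions
        margin += status_effect  # positive = confers status, negative = "dork factor"
    return "likely to diffuse" if margin > 0 else "likely to stall on the far left of the curve"

# Google Glass, as this column sees it: real advantage, real friction, negative status.
print(adoption_outlook(relative_advantage=3.0, compatibility_friction=1.5,
                       complexity_friction=1.2, status_effect=-2.0))
```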

This is the territory that Google Glass currently finds itself in. While I have no doubt that the advantages of wearable technology (as determined by the user) will eventually far outweigh the corresponding “friction” of adoption, we’re not there yet. And so Google Glass depends on the Sixth Dimension. Does adoption make you look innovative, securely balanced on the leading edge? Or does it make you look like a dork? Does it confer social status or strip it away? After the initial buzz about Glass, social opinion seems to be falling into the second camp.

This brings us to another important factor to consider when trying to cash in on a social adoption wave: timing. Google is falling into the classic Microsoft trap of playing its hand too soon through beta release. New is cool among the early adopter set, which makes timing critical. If you can get strategic distribution and build up required critical mass fast enough, you can lessen the “pariah” factor. It’s one thing to be among a select clique of technological PAs, but you don’t want to be the only idiot in the room. Right now, with only 8,000 pairs distributed, if you’re wearing a pair, you’re probably the one that everyone else is whispering about.

Of course, you might not be able to hear them over the sound of your own voice, as you stand in front of the mirror and ask Google Glass to “take a picture.”

 

Psychological Priming and the Path to Purchase

First published March 27, 2013 in Mediapost’s Search Insider

In marketing, I suspect we pay too much attention to the destination, and not enough to the journey. We don’t take into account the cumulative effect of the dozens of subconscious cues we encounter on the path to our ultimate purchase. We certainly don’t understand the subtle changes of direction that can result from these cues.

Search is a perfect example of this.

As search marketers, we believe that our goal is to drive a prospect to a landing page. Some of us worry about the conversion rates once a prospect gets to the landing page. But almost none of us think about the frame of mind of prospects once they reach the landing page.

“Frame” is the appropriate metaphor here, because the entire interaction will play out inside this frame. It will impact all the subsequent “downstream” behaviors. The power of priming should not be taken lightly.

Here’s just one example of how priming can wield significant unconscious power over our thoughts and actions. Participants primed by exposure to a stereotypical representation of a “professor” did better on a knowledge test than those primed with a representation of a “supermodel.”

A simple exposure to a word can do the trick. It can frame an entire consumer decision path. So, if many of those paths start with a search engine, consider the influence that a simple search listing may have.

We could be primed by the position of a listing (higher listings = higher quality alternatives).  We could be primed (either negatively or positively) by an organization that dominates the listing real estate. We could be primed by words in the listing. We could be primed by an image. A lot can happen on that seemingly innocuous results page.

Of course, the results page is just one potential “priming” platform. Priming could happen on the landing page, a third-party site or the website itself. Every single touch point, whether we’re consciously interacting with it or not, has the potential to frame, or even sidetrack, our decision process.

If the path to purchase is littered with all these potential landmines (or, to take a more positive approach, “opportunities to persuade”), how do we use this knowledge to become better marketers? This does not fall into the typical purview of the average search marketer.

Personally, I’m a big fan of the qualitative approach (I know — big surprise) in helping to lay down the most persuasive path possible. Actually talking to customers, observing them as they navigate typical online paths in a usability testing session, and creating some robust scenarios to use in your own walk-throughs will yield far better results than quantitative number-crunching. Excel is not particularly good at being empathetic.

Jakob Nielsen has said that online, branding is all about experience, not exposure. As search marketers, it’s our responsibility to ensure that we’re creating the most positive experience possible, as our prospects make their way to the final purchase.

The devil, as always, is in the details — whether we’re paying conscious attention to them or not.

Viewing the World through Google Colored Glass

First published March 7, 2013 in Mediapost’s Search Insider

Let’s play “What If” for a moment. For the last few columns, I’ve been pondering how we might more efficiently connect with digital information. Essentially, I see the stripping away of the awkward and inefficient interfaces that have been interposed between that information and us. Let’s imagine, 15 years from now, that Google Glass and other wearable technology provides a much more efficient connection, streaming real-time information to us that augments our physical world. In the blink of an eye, we can retrieve any required piece of information, expanding the capabilities of our own limited memories beyond belief. We have perfect recall, perfect information — we become omniscient.

To facilitate this, we need to move our cognitive abilities to increasingly subterranean levels of processing – taking advantage of the “fast and dirty” capabilities of our subliminal mind. As we do this, we actually rewire our brains to depend on these technological extensions. Strategies that play out with conscious guidance become stored procedures that follow scripts written by constant repetition. Eventually, overtraining ingrains these procedures as habits, and we stop thinking and just do. Once this happens, we surrender much of our ability to consciously change our behaviors.

Along the way, we build a “meta” profile of ourselves, which acts as both a filter and a key to the accumulated potential of the “cloud.” It retrieves relevant information based on our current context and a deep understanding of our needs, it unlocks required functionality, and it archives our extended network of connections. It’s the “Big Data” representation of us, condensed into a virtual representation that can be parsed and manipulated by the technology we use to connect with the virtual world.

In my last column, Rob Schmultz and Randy Kirk wondered what a world full of technologically enhanced Homo sapiens would look like. Would we all become the annoying guy in the airport who can’t stop talking on his Bluetooth headset? Would we become so enmeshed in our digital connections that we ignore the physical ones that lie in front of our own noses? Would Google Glass truly augment our understanding of the world, or would it make us blind to its charms? And what about the privacy implications of a world where our every move could instantly be captured and shared online — a world full of digital voyeurs?

I have no doubt that technology can take us to this not-too-distant future as I envisioned it. Much of what’s required already exists. Implantable hardware, heads up displays, sub-vocalization, bio-feedback — it’s all very doable. What I wonder about is not the technology, but rather us. We move at a much slower pace.  And we may not recognize any damage that’s done until it’s too late.

The Darwinian Brain

At an individual level, our brains have a remarkable ability to absorb technology. This is especially true if we’re exposed to that technology from birth. The brain represents a microcosm of evolutionary adaptation, through a process called synaptic pruning. Essentially, the brain builds and strengthens neural pathways that are used often, and “prunes” away those that aren’t. In this way, the brain literally wires itself to be in sync with our environment.
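Here’s a loose cartoon of that “use it or lose it” dynamic: pathways that fire often get strengthened, idle ones slowly decay, and anything that falls below a threshold is pruned away. The rates, threshold and usage pattern are all arbitrary – this is an illustration of the principle, not a neural model.

```python
import random

# A cartoon of synaptic pruning: frequently used pathways are strengthened,
# idle ones decay, and anything below a threshold is pruned. Rates, threshold
# and the usage pattern are arbitrary.

random.seed(0)
pathways = {f"pathway_{i}": 1.0 for i in range(10)}
frequently_used = {"pathway_0", "pathway_3", "pathway_7"}  # shaped by the environment

for _ in range(1000):  # days of experience
    for name in list(pathways):
        if name in frequently_used and random.random() < 0.5:
            pathways[name] = min(pathways[name] + 0.05, 5.0)  # strengthen with use
        else:
            pathways[name] *= 0.995                           # slow decay when idle
    pathways = {n: w for n, w in pathways.items() if w > 0.1}  # prune the weakest

print(f"{len(pathways)} of 10 pathways survive: {sorted(pathways)}")
```

Grow up in an environment where the heads-up display is always on, and the pathways that serve it end up on the “frequently used” list.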

The majority of this neural wiring happens when we’re still children. So, if our childhood environment happens to include technologies such as heads-up displays, implantable chips and other direct interfaces to digital information, our brains will quickly adapt to maximize the use of those technologies. Adults will also adapt to these new technologies, but because our brains are less “plastic” than those of children, the adaptation won’t be as quick or complete.

The Absorption of Technology by Society

I don’t worry about our brain’s ability to adapt. I worry about the eventual impact on our society. With changes this portentous, there is generally a social cost. To consider what might come, it may be beneficial to look at what has been. Take television, for example.

If a technology is ubiquitous and effective enough to spread globally, like TV did, there is the issue of absorption. Not all sectors of society will have access to the technology at the same time. As the technology is absorbed at different rates, it can create imbalances and disruption. Think about the societal divide caused by the absorption of TV, which resulted in a completely different information distribution paradigm. One can’t help thinking that TV played a significant role in much of the political change we saw sweep over the world in the past three decades.

And even if our brains quickly adapt to technology, that doesn’t mean our social mores and values will move as quickly. As our brains rewire to adapt to new technologies, our cultural frameworks also need to shift. With different generations and segments of society at different places on the absorption curve, this can create further tensions. If you take the timeline of societal changes documented by Robert Putnam in “Bowling Alone” and overlay the timing of the adoption of TV, the correlation is striking and not a little frightening.

Even if our brains have the ability to adapt to technology, it isn’t always a positive change. For example, there is compelling evidence that early exposure to TV has contributed to the recent explosion of diagnosed ADHD and possibly even autism.

Knowing Isn’t Always the Same as Understanding

Finally, we have the greatest fear of Nicholas Carr:  maybe this immediate connection to information will have the “net” effect of making us stupid — or, at least, more shallow thinkers. If we’re spoon-fed information on demand, do we grow intellectually lazy? Do we start to lose the ability to reason and think critically? Will we swap quality for quantity?

Personally, I’m not sure Carr’s fears are founded on this front. It may be that our brains adapt and become even more profound and capable. Perhaps when we offload the simple journeyman tasks of retrieving information and compiling it for consideration to technology, our brains will be freed up to handle deeper and more abstract tasks. The simple fact is, we won’t know until it happens. It could be another “Great Leap Forward,” or it may mark the beginning of the decline of our species.

The point is, we’ve already started down the path, and it’s highly unlikely we’ll retreat at this point. I suppose we have no option but to wait and see.

Why I – And Mark Zuckerberg – are Bullish on Google Glass

First published February 28, 2013 in Mediapost’s Search Insider

Call it a Tipping Point. Call it an Inflection Point. Call it Epochal (whatever that means). The gist is, things are going to change — and they’re going to change in a big, big way!

First, with due deference to the brilliant Kevin Kelly, let’s look at how technology moves. In his book “What Technology Wants,” Kelly shows that technology is not dependent on a single invention or inventor. Rather, it’s the sum of multiple, incremental discoveries that move technology to a point where it can breach any resistance in its way and move into a new era of possibility. So, even if Edison had never lived, we’d still have electric lights in our homes. If he weren’t there, somebody else would have discovered it (or more correctly, perfected it). The momentum of technology would not have been denied.

Several recent developments indicate that we’re on the cusp of another technological wave of advancement. These developments have little to do with online technologies or capabilities. They’re centered on how humans and hardware connect — and it’s impossible to overstate their importance.

The Bottleneck of Our Brains

Over the past two decades, there has been a massive build-up of online capabilities. In this case, what technology has wanted is the digitization of all information. That was Step One. Step Two is to render all that information functional. Step Three will be to make all the functionality personalized. And we’re progressing quite nicely down that path, thank you very much. The rapidly expanding capabilities of online far surpass what we are able to assimilate and use at any one time. All this functionality is still fragmented and is in the process of being developed (one of the reasons I think Facebook is in danger of becoming irrelevant) but it’s there. It’s just a pain in the butt for us to utilize it.

The problem is one of cognition. The brain has two ways to process information, one fast and one slow. The slow way (using our conscious parts of the brain) is tremendously flexible but inefficient. This is the system we’ve largely used to connect online. Everything has to be processed in the form of text, both in terms of output and input, generally through a keyboard and a screen display. It’s the easiest way for us to connect with information, but it’s far from the most efficient way.

The second way is much, much faster. It’s the subconscious processing of our environment that we do every day. It’s what causes us to duck when a ball is thrown at our head, jump out of the way of an oncoming bus, fiercely protect our children and judge the trustworthiness of a complete stranger. If our brains were icebergs, this would be the 90% hidden beneath the water. But we’ve been unable to access most of this inherent efficiency and apply it to our online interactions — until now.

The Importance of Siri and Glass

Say what you want about Mark Zuckerberg, he’s damned smart. That’s why he knew immediately that Google Glass is important.

I don’t know if Google Glass will be a home run for Google. I also don’t know if Siri will ever pay back Apple’s investment in it. But I do know that 30 years from now, they’ll both be considered important milestones. And they’ll be important because they were representative of a sea change in how we connect with information. Both have the potential to unlock the efficiency of the subconscious brain. Siri does it by utilizing our inherent communication abilities and breaking the inefficient link that requires us not only to process our thoughts as language, but also to laboriously translate them into keystrokes. In neural terms, this is one of the most inefficient paths imaginable.

But if Siri teases us with a potentially more efficient path, Google Glass introduces a new, mind-blowing scenario of what might be possible. To parse environmental cues and stream information directly into our visual cortex in real time, creating a direct link with all that pent-up functionality that lives “in the cloud,” wipes away most of the inefficiency of our current connection paradigm.

Don’t think of the current implementation that Google is publicizing. Think beyond that to a much more elegant link between the vast capabilities of a digitized world and our own inner consciousness. Whatever Glass and Siri (and their competitors) eventually evolve into in the next decade or so, they will be far beyond what we’re considering today.

With the humanization of these interfaces, a potentially dark side effect will take place. These interfaces will become hardwired into our behavior strategies. Now, because our online interactions are largely processed at a conscious level, the brain tends to maintain maximum flexibility regarding the routines it uses. But as we access subconscious levels of processing with new interface opportunities, the brain will embed these at a similarly subconscious level. They will become habitual, playing out without conscious intervention. It’s the only way the brain can maximize its efficiency. When this happens, we will become dependent on these technological interfaces. It’s the price we’ll pay for the increased efficiency.