The Pros and Cons of a Fuel Efficient Brain

Your brain will only work as hard as it has to. And if it makes you feel any better, my brain is exactly the same. That’s the way brains work. They conserve horsepower until it’s absolutely needed. In the background, the brain is running a constant calculation: “What do I want to achieve and, based on everything I know, what is the easiest way to get there?” You could call it lazy, but I prefer the term “efficient.”

The brain has a number of tricks to do this that involve relatively little thinking. In most cases, they involve swapping something that’s easy for your brain to do in place of something difficult. For instance, consider when you vote. It would be extraordinarily difficult to weigh all the factors involved to truly make an informed vote. It would require a ton of brainpower. But it’s very easy to vote for whom you like. We have a number of tricks we use to immediately assess whether we like and trust another individual. They require next to no brainpower. Guess how most people vote? Even those of us who pride ourselves on being informed voters rely on these brain shortcuts more than we would like to admit.

Here’s another example that’s just emerging, thanks to search engines. It’s called the Google Effect, and it’s an extension of a concept called transactive memory. Researchers Betsy Sparrow, Jenny Liu and Daniel Wegner identified the Google Effect in 2011. Wegner first explained transactive memory back in the ’80s. Essentially, it means that we won’t bother to remember something that we can easily reference when we need it. When Wegner first talked about transactive memory, he used the example of a husband and wife. The wife was good at remembering important dates, such as anniversaries and birthdays. The husband was good at remembering financial information, such as bank balances and when bills were due. The wife didn’t have to remember financial details and the husband didn’t have to worry about dates. All each had to remember was what the other was good at remembering. Wegner called this “chunking” of our memory requirements “metamemory.”

If we fast-forward 30 years from Wegner’s original paper, we find a whole new relevance for transactive memory, because we now have the mother of all “metamemories”: Google. If we hear a fact but know that it can easily be looked up on Google, our brains automatically decide to expend little to no effort in trying to memorize it. Subconsciously, the brain goes into power-saver mode. All we remember is that when we do need to retrieve the fact, it will be a few clicks away on Google. Nicholas Carr fretted about whether this and other cognitive shortcuts were making us stupid in his book “The Shallows.”

But there are other side effects that come from the brain’s tendency to look for shortcuts without our awareness. I suspect the same thing is happening with social connections. Which would you think requires more cognitive effort: a face-to-face conversation with someone, or texting them on a smartphone?

Face-to-face conversation can put a huge cognitive load on our brains. We’re receiving communication at a much greater bandwidth than with text. When we’re across from a person, we not only hear what they’re saying, we’re reading emotional cues, watching facial expressions, interpreting body language and monitoring vocal tones. It’s a much richer communication experience, but it’s also much more work. It demands our full attention. Texting, on the other hand, can easily be done alongside other tasks. It’s asynchronous – we can pause and pick up whenever we want. I suspect it’s no coincidence that younger generations are moving more and more to text-based digital communication. Their brains are pushing them in that direction because it’s less work.

One of the great things about technology is that it makes our life easier. But is that also a bad thing? If we know that our brains will always opt for the easiest path, are we putting ourselves in a long, technology aided death spiral? That was Nicholas Carr’s contention. Or, are we freeing up our brains for more important work?

More on this to come next week.

The Psychology of Usefulness: A New Model for Technology Acceptance.

In the last post, I reviewed the various versions of the Technology Acceptance Model. Today, I’d like to share my own thoughts on the subject and a proposed new model. But first, I’d like to bring an entirely new model into the discussion.

Introduction of Sense Making

I like Gary Klein’s Theory of Sense Making – a lot! And in the area of technology acceptance, I think it has to be part of the discussion. It introduces a natural Bayesian rhythm to the process that I think provides an intuitive foundation for our decisions on whether or not we’ll accept a new technology.

Gary Klein et al. – Data-Frame Model of Sensemaking, from “How Might ‘Transformational’ Technologies and Concepts Be Barriers to Sensemaking in Intelligence Analysis”

Essentially, the Sense Making Model says that when we try to make sense of something new, we begin with some type of perspective, belief or viewpoint. In Bayesian terms, this would be our prior. In Klein’s model, he called it a frame.
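To make the Bayesian framing concrete, here’s a minimal sketch (my own illustration, not anything from Klein’s work) in which a frame is a Beta-distributed belief that gets strengthened or weakened as confirming or contradicting data arrives:

```python
# Illustrative only: a "frame" modeled as a Beta-distributed belief.
# The specific prior and observation counts below are assumptions for the example.

def update_frame(a, b, confirmations, contradictions):
    """Conjugate Beta update: fold in confirming and contradicting observations."""
    return a + confirmations, b + contradictions

def belief(a, b):
    """Mean of Beta(a, b): the current strength of the frame, from 0 to 1."""
    return a / (a + b)

# Start with a mildly positive frame: Beta(3, 2), belief = 0.6 ...
a, b = 3, 2
# ... then take in 8 confirming and 1 contradicting piece of data.
a, b = update_frame(a, b, confirmations=8, contradictions=1)
print(round(belief(a, b), 2))  # prints 0.79 - the frame has strengthened
```

The point is simply that the prior (the frame) and the data jointly determine the posterior – which then becomes the prior for the next round.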

Now, this frame doesn’t only give us a context in which to absorb new data, it actually helps define what counts as data. This is a critical concept to remember, because it dramatically impacts everything that follows. Imagine, for example, that you arrive on the scene of a car accident. If your frame was that of a non-involved bystander, the data you might seek in making sense of the situation would be significantly different than if your frame was that of a person who recognized one of the vehicles involved as belonging to your next-door neighbor.

In the case of technology acceptance, this initial frame will shape what types of data we would seek in order to further qualify our decision. If we start with a primarily negative attitude, we would probably seek data that would confirm our negative bias. The opposite would be true if we were enthusiastic about the adoption of technology. For this reason, I believe the creation of this frame should be a step in any proposed acceptance model.

But Sense Making also introduces the concept of iterative reasoning. After we create our frame, we do a kind of heuristic “gap analysis” on it. We prod and poke to see where the weaknesses are. What are the gaps in our current knowledge? Are there inconsistencies in the frame? What is our level of conviction in our current views and attitudes? The weaker the frame, the greater our need to seek new data to strengthen it. This process happens without a lot of conscious consideration. For most of us, this testing of the frame is probably a subconscious evaluation that then creates an emotional valence that will impact future behavior. On one extreme, it could be a strongly held conviction; on the other, a high degree of uncertainty.

If we decide we need more data, the Sense Making Model introduces another “Go/No Go” decision point. If the new data confirms our initial frame, we elaborate that frame, making it more complete. We fill in gaps, strengthen beliefs, discard non-aligned data and update our frame. If our sense making is in support of a potential action and we seem to be heading in the right direction with our data foraging, this can be an iterative process that continually updates our frame until it’s strong enough to push us over the threshold of executing that action.

But, if the new data causes serious doubt about our initial frame, we may need to consider “reframing,” in which case we’d have to seek new frames, compare them against our existing one and potentially discard it in favor of one of the new alternatives. This essentially returns us to square one, where we need to find data to elaborate the new frame. And there the cycle starts again.

This double loop learning process illustrates that a decision process, such as accepting a new technology, can loop back on itself at any point, and may do so at several points. More than this, it is always susceptible to a “reframing” incident, where new data may cause the existing frame to be totally discarded, effectively derailing the acceptance process.
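The elaborate/reframe cycle above can be sketched in code. Everything here – the thresholds and the numeric “frame strength” – is a hypothetical simplification for illustration, not part of Klein’s model:

```python
# Hypothetical sketch of the elaborate-vs-reframe loop.
# Thresholds and the scalar "frame strength" are illustrative assumptions.

ACT_THRESHOLD = 0.8      # frame strong enough to trigger action
REFRAME_THRESHOLD = 0.3  # frame weak enough to be discarded

def sense_making(frame_strength, data_stream):
    """Iterate over incoming evidence; decide to act, reframe, or keep foraging."""
    for evidence in data_stream:          # each item nudges the frame up or down
        frame_strength = min(max(frame_strength + evidence, 0.0), 1.0)
        if frame_strength >= ACT_THRESHOLD:
            return "act"                  # frame elaborated enough to execute
        if frame_strength <= REFRAME_THRESHOLD:
            return "reframe"              # discard the frame - back to square one
    return "keep foraging"                # still uncertain: seek more data

print(sense_making(0.5, [0.2, 0.15]))    # confirming data -> act
print(sense_making(0.5, [-0.15, -0.1]))  # disconfirming data -> reframe
```

Note the two exits: the same loop can push us over the threshold of action or derail the frame entirely.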

Revisiting Goal Striving

I also like Bagozzi’s Goal Striving model, for reasons outlined in a previous post. I won’t rehash them here, except to say that this model introduces a broader context that is more aligned with the complexity of our typical decision process. In this case, our desire to achieve goals is a fundamental part of the creation of the original frame, which forms the starting point for our technology acceptance decision. The Goal Desire step, at the left side of the model, could effectively be the frame that then gets updated as we move from Goal Intention to Behavioral Desire, and then once again as we move to Behavioral Intention. All the inputs shown in Bagozzi’s model, both external factors (i.e., group norms) and internal factors (emotions, etc.), would serve as data in either the updating or reframing loops in Klein’s model.

Bagozzi’s purchasing behavior adoption model

A New Model

As the final step in this rather long process I’ve been dragging you through for the last several posts, I put forward a new proposed model for technology acceptance.

The proposed technology acceptance model

I’ve attempted to include elements of Sense Making, Goal Striving and some of the more valuable elements from the original Technology Acceptance Models. I’ve also tried to show that this is an iterative journey – a series of data gathering and consideration steps, each one of which can result in either a decision to move forward (elaborate the frame) or move backwards to a previous step (reframe). The entire model is shown below, but we’ll break it down into pieces to explore each step a little more deeply.


Setting the Frame


The first step is to set the original frame, which is the Goal Intention. In this case, a goal is either presented to us, or we set the goal ourselves. The setting of this goal is the trigger to establish both a cognitive and emotional frame that sets the context for everything that follows. Factors that go into the creation of the Goal Intention can include both positive and negative emotions, our attitudes towards the success of the goal, how it will impact our current situation (affect towards the means), and what we expect as far as outcomes. These factors will determine how robust our Goal Intention is, which will factor heavily in any subsequent decisions made as part of this Goal Intention, including the decision to accept or reject any relevant technologies required to execute on it.

We can assume, because no updating step is shown here, that once the Goal Intention is formed, the person will move forward to the next step – the retrieval of internal information and the creation of our attitude towards the goal to be achieved.

The Internal Update


With the setting of the goal intention, we have our frame. Now, it’s up to us to update that frame. Again, our confidence in this initial frame will determine how much data we feel we need to collect to update it. This follows Herbert Simon’s heuristic rules of thumb for Bounded Rationality. If we’re highly confident in our frame (to the point where it’s entrenched as a belief), we’ll seek little or no data, and if we do, the data we seek will tend to be confirmatory. If we’re less confident in our frame, we’ll actively go and forage for more data, and we’ll probably be more objective in our judgement of that data. Again, remember, Klein’s Sense Making model says that our frame determines what we define as data.

The first update will be a heuristic and largely subconscious one. We’ll retrieve any relevant information from our own memory. This information, which may be positive or negative in nature, will be assembled into an “attitude” towards the technology. This is our first real conscious evaluation of the technology in question. It would be akin to a Bayesian “prior” – a starting point for subsequent evaluation. It also represents an updating of the original frame. We’ve moved from Goal Intention to an emotional judgement of the technology to be evaluated.

The creation of the “Attitude” also requires us to begin the risk/reward balancing, similar to Charnov’s Marginal Value Theorem used in optimal foraging. Negative items we retrieve increase risk; positive ones increase reward. The balance between the two determines our next action. From this point forward, each updating of the frame leads us to a new decision point, where we have to decide whether we move forward (elaborate our frame) or return to an earlier point in the decision process, with the possibility that we may need to reframe at that point. Each of these represents a “friction point” in the decision process, with reward driving the process forward and risk introducing new friction. At the attitude stage, excessive risk may cause us to go all the way back to reconsidering the goal intention. Does the goal as we understand it still seem like the best path forward, given the degree of risk we have now assigned to the execution of that goal?
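As a rough sketch of such a friction point (the function names and the threshold are my assumptions, not part of any formal model), each stage boils down to weighing accumulated reward against accumulated risk:

```python
# Illustrative sketch of a "friction point": reward pushes the decision
# forward, risk pushes it back. Threshold and values are assumptions.

def decision_point(reward, risk, forward_threshold=1.0):
    """Move forward (elaborate the frame) if reward sufficiently outweighs risk."""
    if risk == 0:
        return "elaborate"          # nothing holding us back
    if reward / risk >= forward_threshold:
        return "elaborate"          # momentum wins: update the frame, move on
    return "loop back"              # friction wins: revisit an earlier stage

print(decision_point(reward=3.0, risk=2.0))  # prints "elaborate"
print(decision_point(reward=1.0, risk=2.0))  # prints "loop back"
```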

Let’s assume we’ve decided to move forward. Now we have to take that Attitude and translate it into Desire. Desire brings social aspects into the decision. Will the adoption of the technology elevate our social status? Will it cause us to undertake actions that may not fit into the social norms of the organization, or square well with our own social ethics? These factors will have a moderating effect on our desire. Even if we agree that the technology in question may meet the goal, our desire may flag because of the social costs that go along with the adoption decision. Again, this represents a friction point, where our desire may be enough to carry us forward, or where it may not be strong enough, causing us to re-evaluate our attitude towards the technology. If we bump back to the “Attitude” stage, a sufficiently negative judgement may in turn bump us even further back to goal intention.

The External Update


With the next stage, we’ve moved from Desire to Intention. Up to now, the process has been primarily internal, and either emotional or heuristic. There has been little to no rational deliberation about whether or not to accept the technology in question. The frame that has been created to this point is an emotional and attitudinal frame.

But now, assuming that this frame is open to updating with more information, the process becomes more open to external variables and also to the input of data gathered for the express purpose of rational consideration. We start openly canvassing the opinions of others (subjective norm) and evaluating the technology based on predetermined factors. In the language of marketing, this is the consumer’s “consideration” stage. We know the next step is Action – where our intention becomes translated into behavior. In the previous TAM models, this step was a foregone conclusion. Here, however, we see that it’s actually another decision friction point. If the data we gather doesn’t support our intention, action will not result. We will loop back to Goal Intention and start looking for alternatives. At the very least, this one stage may loop back on itself, resulting in iterative cycles of setting new data criteria, gathering this data and pushing towards either a “go” or “no go” decision. Only when there is sufficient forward momentum will we move to action.

Here, at the Action stage, our evaluation will rely on experiential feedback. At this point, we resurrect the concepts of “Ease of Use” and “Perceived Usefulness” from previous versions of TAM. In this case, the Intention stage would have constructed an assumed “prior” for each of these – a heuristic assessment of how easy it will be to use the technology and also the usefulness of it. This then gets compared to our actual use of the technology. If the bar of our expectations is not met, the degree of friction increases, holding us back from repeating the action, which is required to entrench it as a behavior. This will be a Charnovian balancing act. If the usefulness is sufficient, we will put up with a shortfall in the perceived ease of use. On the flip side, no matter how easy the tool is to use, if it doesn’t deliver on our expectation of usefulness, it will get rejected. Too much friction at this point will result in a loop back to the Intention stage (where we may reassess our evaluation of the technology to see if the fault lies with us or with the tool) and will possibly cause a reversion all the way to our Goal Intention.

If our experience meets our expectation, repetition will begin to create an organizational behavior. At this stage, we move from trial usage to embedding the technology into our processes, and organizational feedback becomes the key evaluative criterion. Even if we love the technology, sufficient negative feedback from the organization will cause us to re-evaluate our intention. Finally, if the technology being evaluated successfully navigates past this chain of decision points without becoming derailed, it becomes entrenched. We then evaluate whether it successfully plays its part in the attainment of our goals. This brings us full circle, back to the beginning of the process.

Summing Up

The original goal of the Technology Acceptance Model was to provide a testable model to predict adoption. My goal is somewhat different: showing technology adoption as a series of Sense Making and Goal Attainment decisions, each offering the opportunity to move forward to the next stage or loop back to a previous stage. In extreme cases, it may result in outright rejection of the technology. As far as testing for predictability, this is not the parsimonious model envisioned by Venkatesh, but then again, I suspect parsimony was sacrificed by Venkatesh and the contributing authors somewhere over the course of the multiple revisions that were offered.

This is a model of Bayesian decision making, and I believe it could be applied to many considered decision scenarios. One could map most higher-end consumer purchases on the same decision path. The value of the model is in understanding each stage of the decision path and the factors that introduce both risk-related friction and reward-related momentum. It would be fascinating to start identifying representative risk/reward thresholds at each point, so factors can be rebalanced to achieve a successful outcome.

As we talk about the friction in these decision points, it’s also important to remember that we will all have different set points for how we balance risk and reward. When it comes to technology acceptance, our set point will determine where we fall on Everett Rogers’ Diffusion of Innovations distribution curve.


Those with a high tolerance for risk and an enhanced ability to envision reward will fall to the far left of the curve, either as Innovators or Early Adopters. Rogers noted in Diffusion of Innovations:

Innovators may…possess a type of mental ability that better enables them to cope with uncertainty and to deal with abstractions. An innovator must be able to conceptualize relatively abstract information about innovations and apply this new information to his or her own situation

Those with a low tolerance for risk and an inability to envision rewards will be to the far right, falling into the Laggard category. The rest of us, representing 68% of the general population, will fall somewhere in between. So, in trying to predict the acceptance of any particular technology, it will be important to assess the innovativeness of the individual making the decision.
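Rogers’ standard category cutoffs (2.5% innovators, 13.5% early adopters, 34% early majority, 34% late majority, 16% laggards) fall at whole standard deviations from the mean, so an individual’s set point – expressed here as a z-score of innovativeness – maps onto a category like this (an illustrative sketch, not Rogers’ own notation):

```python
# Map a z-score of "innovativeness" onto Rogers' adopter categories,
# using his standard cutoffs at whole standard deviations from the mean.

def adopter_category(z):
    """Classify by z-score of innovativeness (higher = more innovative)."""
    if z > 2.0:
        return "innovator"        # top 2.5%
    if z > 1.0:
        return "early adopter"    # next 13.5%
    if z > 0.0:
        return "early majority"   # next 34%
    if z > -1.0:
        return "late majority"    # next 34%
    return "laggard"              # bottom 16%

print(adopter_category(2.3))   # prints "innovator"
print(adopter_category(-0.4))  # prints "late majority"
```

The early and late majorities together account for the 68% in the middle of the curve.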

This hypothetical model represents a culmination of the behaviors I’ve observed in many B2B adoption decisions. I’ve always stressed the importance of understanding the risk/reward balance of your target customers. I’ve also mapped out how this can vary from role to role in organizational acceptance decisions.

This post, which is currently pushing 3000 words, is lengthy enough for today. In the next post, I’ll revisit what this new model might mean for our evaluation of usefulness and subsequent user loyalty.

The Psychology of Usefulness: The Acceptance of Technology – Part One

In the last post, I talked about what it takes to break a habit built around an online tool, website or application. In today’s post, I want to talk about what happens when we decide to replace that functional aid, whatever it might be.

So, as I said last time, the biggest factor contributing to the breakdown of habit is the resetting of our expectation of what is an acceptable outcome. If our current tools no longer meet this expectation, then we start shopping for a new alternative. In marketing terms, this would be the triggering of need.

Now, this breakdown of expectation can play out in one of two ways. First, if we’re not aware of an alternative solution, we may just feel an accumulation of frustration and dissatisfaction with our current tools. This buildup of frustration can create a foundation for further “usefulness foraging” but generally isn’t enough by itself to trigger action. This lends support to my hypothesis that we’re borrowing the evolved Marginal Value algorithm to help us judge the usefulness of our current tools. To put it in biological terms we’re more familiar with: “A bird in the hand is worth two in the bush.” You don’t leave a food patch unless: A) you are reasonably sure there’s another, more promising, patch that can be reached with acceptable effort, or B) you have completely exhausted the food available in the patch you’re in. I believe the same is true for usefulness. We don’t throw out what we have until we either know there’s an acceptable alternative that promises a worthwhile increase in usefulness, or our current tool is completely useless. Until then, we put up with the frustration.
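That patch-leaving rule can be sketched as a simple decision function. The names, values and the flat “switching cost” are my illustrative assumptions, not Charnov’s actual rate equations:

```python
# Illustrative patch-leaving rule applied to tool usefulness: switch only
# when a known alternative beats the current tool after switching costs,
# or when the current tool is effectively useless.

def should_switch(current_value, alternatives, switching_cost):
    """alternatives: list of (name, expected_value) for tools we know about."""
    if current_value <= 0:
        return True                          # patch exhausted: anything is better
    for _name, expected in alternatives:
        if expected - switching_cost > current_value:
            return True                      # a better patch within acceptable effort
    return False                             # bird in the hand: stay put

# Frustrated with the current tool (value 4), but the best-known
# alternative (value 6) costs 3 to adopt - so we stay and put up with it.
print(should_switch(4, [("new_tool", 6)], switching_cost=3))  # prints False
```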

The Technology Acceptance Model

Let’s say that we have decided it’s worth the effort to find an alternative. What are the mechanisms we use to find the best one? Fred Davis and Richard Bagozzi tackled that question in 1989 and came up with the first version of their Technology Acceptance Model. It took the Theory of Reasoned Action, developed by Martin Fishbein and Icek Ajzen a decade earlier (1975, 1980), and tried to apply it to the adoption of a new technology. They also relied on the work Everett Rogers did on the diffusion of innovations.

First of all, like all models, the TAM had to make some assumptions to simplify real world decisions down to a theoretical model. And, in doing so, it has required a number of revisions to try to bring it closer to what technology adoption decisions look like in the real world.

Let’s start with the foundation of the Theory of Reasoned Action. In its simplest form, the TRA says that voluntary behavior is predicted by an individual’s attitude towards that behavior and how they think others would think of them if they performed that behavior.

The Theory of Reasoned Action (TRA)

So, let’s take the theory for a test drive – if you believe that exercising will increase your health and you also believe that others in your social circle will applaud you for exercising, you’ll exercise. With this example, I think you begin to see where the original TRA may run into problems. Even with the best of intentions, we may not actually make it to the gym. Fishbein and Ajzen’s goal was to create an elegant, parsimonious model that would reliably predict both behaviors and intentions, creating a distinction between the two. Were they successful?

In a meta-analysis of TRA, Sheppard et al (1988) found that attitude was a fairly accurate predictor of intention. If you believe going to the gym is a good thing, you will probably intend to go to the gym. The model didn’t do quite as good a job in predicting behavior. Even if you did intend to go to the gym, would you actually go?

The successful progression from intention to behavior seemed to be reliant on several real-world factors, including the time between intention and action (the longer the interval, the greater the erosion of intention) and lack of control. In the gym example, what if your gym suddenly increased its membership fees, or a sudden snowstorm made it difficult to drive there?

Also, if you were choosing from a set of clear alternatives and had to choose one, TRA did a pretty good job of predicting behaviors. But if alternatives were undetermined, or there were other variables to consider, then the predictive accuracy of TRA dropped significantly.

Let me offer an example of how TRA might not work very well in a real-world setting. In my book, The BuyerSphere Project, I spent a lot of time looking at the decision process in B2B buying scenarios. If we used the TRA model, we could say that if a buyer had to choose between 4 different software programs for their company, we could use their attitudes towards each of the respective programs, as well as the aggregated (and weighted – because not every opinion should carry the same weight) attitudes of the buyer’s co-workers, peers and bosses towards these programs, to determine their intention. And once we have their intention, that should lead to behavior.

But in this scenario, let’s look at some of the simplifying assumptions we’ve had to make to try to cram a real-world scenario into the Fishbein-Ajzen model:

  • We assume a purchase will have to be made from one of the four alternatives. In a real-world situation, the company may well decide to stick with what they have.
  • We assume the four choices will remain static and we won’t get a new candidate out of left field.
  • We assume that attitudes towards each of the alternatives will remain static through the behavioral interval. This almost never happens in B2B buying scenarios.
  • We assume the buyer – or rational agent – will be in full control of their behaviors and the ultimate decision. Again, this is rarely the case in B2B buying decisions.
  • We assume that there won’t be some mitigating factor that arises between intention and behavior – for example, a spending freeze or a change in requirements.

As you can see, in trying to create a parsimonious model, Fishbein and Ajzen ran into a common trap – they had to simplify to the point where the model failed to work consistently in the real world.

But in her review of the model, Alice Darnell pointed out Sheppard’s main criticism of the TRA:

Sheppard et al. (1988) also addressed the model’s main limitation, which is that it fails to account for behavioural outcomes which are only partly under the individual’s volitional control.

I call out the word “volitional” deliberately. I’ve highlighted many external factors that may lie beyond the volitional control of the individual, but I think the biggest limitation of the TRA lies in its name: Theory of Reasoned Action. It assumes that reason drives our intentions and behaviors. It doesn’t account for emotion.

Applying Reasoned Action to Technology Acceptance

Now, let’s see how Davis and Bagozzi took Fishbein and Ajzen’s foundational work and applied it to the acceptance of new technologies.

In their first model (1989), they took attitudes and subjective norms (the attitudes of others) and adapted them for a more applied activity: the use of a new technological tool. They came up with two attitude drivers: Perceived Usefulness and Perceived Ease of Use. If you think back to Charnov’s Marginal Value Theorem, this is exactly the same risk/reward mechanism at work. In foraging, it would be the yield of food over the perceived required effort. In technology acceptance, Perceived Usefulness is the reward and Perceived Ease of Use is the risk to be calculated. Davis and Bagozzi assume the user does a quick mental calculation, using their own knowledge and the knowledge of others, to come up with a Usefulness/Ease value that creates their attitude towards using. This then becomes their Behavioral Intention to Use – which should lead to Actual System Use.

The original Technology Acceptance Model (TAM)
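As a toy illustration of that chain (the weights, the 0-to-1 scales and the threshold are my assumptions – the original TAM is a statistical model, not a formula like this):

```python
# Hypothetical sketch of the original TAM chain. The 0.7/0.3 weights,
# 0-1 scales and 0.5 threshold are illustrative assumptions only.

def attitude(perceived_usefulness, perceived_ease_of_use, w_useful=0.7):
    """Attitude toward using: a weighted blend of the two perceptions (0-1)."""
    return w_useful * perceived_usefulness + (1 - w_useful) * perceived_ease_of_use

def will_use(perceived_usefulness, perceived_ease_of_use, threshold=0.5):
    """Behavioral Intention to Use -> Actual System Use, as the model assumes."""
    return attitude(perceived_usefulness, perceived_ease_of_use) >= threshold

# Very useful but clunky: usefulness carries the decision.
print(will_use(perceived_usefulness=0.9, perceived_ease_of_use=0.2))  # prints True
```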

The TAM model was clean and parsimonious. There was just one problem. It didn’t do a very good job of predicting usage in real world situations. There seemed to be much more at work here in actual decisions to accept technologies. In the next post, we’ll look at how the TAM model was modified to bring it closer to real behaviors.

The Psychology of Usefulness: How Online Habits are Broken

Last post, I talked about how Google became a habit – Google being the most extreme case of online loyalty based on functionality I could think of. But here’s the thing with functionally based loyalty – it’s very fickle. In the last post I explained how Charnov’s Marginal Value Theorem dictates how long animals spend foraging in a patch before moving on to the next one. I suspect the same principles apply to our judging of usefulness. We only stay loyal to functionality as long as we believe there are no more-functional alternatives available to us for an acceptable investment of effort. If that functionality has become automated in the form of a habit, we may stick with it a little longer, simply because it takes our rational brain a while to figure out there may be better options, but sooner or later it will blow the whistle and we’ll start exploring our options. Charnov’s internal algorithm will tell us it’s time to move on to the next functional “patch.”

Habits break down when there’s a shift in one of the three prerequisites: frequency, stability or acceptable outcomes.

If we stop doing something on a frequent basis, the habit will slowly decay. But because habits tend to be stored at the limbic level (in the basal ganglia), they prove to be remarkably durable. There’s a reason we say old habits die hard. Even after a long hiatus we find that habits can easily kick back in. Reduction of frequency is probably the least effective way to break a habit.

A more common cause of habitual disruption is a change in stability. If something significant suddenly changes in our task environment, our “habit scripts” start running into obstacles. Think about the last time you did a significant upgrade to a program or application you use all the time. If menu options or paths to common functions change, you find yourself constantly getting frustrated because things aren’t where you expect them to be. Your habit scripts aren’t working for you anymore and you are being forced to think. That feeling of frustration is how the brain protects habits, and it shows how powerful our neural energy-saving mode is. But even if the task environment becomes unstable for a time, chances are the instability is temporary. The brain will soon reset its habits and we’ll be back plugging away subconsciously at our tasks. Instability does break a habit, but the brain just rebuilds a new one to take its place.

A more permanent form of habit disruption comes when outcomes are no longer acceptable. The brain hates these types of disruptions, because it knows that finding an alternative could require a significant investment of effort. It basically puts us back at square one. The amount of investment required depends on a number of things: the scope of change required (is it just one aspect of a multi-step task, or the entire procedure?), current awareness of acceptable alternatives (is a better solution near at hand, or do we have to find it?), the learning curve involved (how different is the alternative from what we’re used to using?), other adoption requirements (do we have to make an investment of resources, including time and/or money?) and how much downtime will be involved in adopting the alternative. All these questions are the complexities that can be factors in the Marginal Value Theorem.

Now, let’s look at how each of these potential habit breakers applies to Google. First of all, frequency probably won’t be a factor because we will search more, not less, in the future.

Stability may be a more likely cause. The fact is, the act of online searching hasn’t really changed that much in the last 20 years. We still type in a query and get a list of results. If you look at Google circa 1998, it looks a little clunky and amateurish next to today’s results page, but given that 16 years have come and gone, the biggest surprise is that the search interface hasn’t changed more than it has.

Google now and then

A big reason for this is to maintain stability in the interface, so habits aren’t disrupted. The search page relies on ease of information foraging, so it’s probably the most tested piece of online real estate in history. Every pixel of what you see on Google, and, to a lesser extent, its competitors, has been exhaustively tested.

That has been true in the past but because of the third factor, acceptability of outcomes, it’s not likely to remain true in the future. We are now in the age of the app. Searching used to be a discrete function that was just one step of many required to complete a task. We were content to go to a search engine, retrieve information and then use that information elsewhere with other tools or applications. In our minds, we had separate chunks of online functionality that we would assemble as required to meet our end goal.

Let me give you an example. Let’s imagine we’re going to London for a vacation. In order to complete the end goal – booking flights, hotels and whatever else is required – we know we will probably have to go to many different travel sites, look up different types of information and undertake a number of actions. We expect that this will be the best path to take to our end goal. Each chunk of this “master task” may in turn be broken down into separate sub-tasks. Along the way, we’ll be relying on those tools that we’re aware of and a number of stored procedures that have proven successful in the past. At the sub-task level, it’s entirely possible that some of those actions have been encoded as habits. For an example of how these tasks and stored procedures would play out in a typical search, see my previous post, A Cognitive Walkthrough of Searching.

But we have to remember that the only reason the brain is willing to go to all this work is that it believes it’s the most efficient route available to it. If there were a better alternative that would produce an acceptable outcome, the brain would take it. Our expectation of what constitutes an acceptable outcome would be altered, and our Marginal Value algorithm would be reset.

Until now, functionality and information haven’t intersected too often online. There were places we went to get information, and there were places we went to do things. But from this point forward, expect those two aspects of online to overlap more and more often. Apps will retrieve information and integrate it with usefulness. Travel aggregator sites like Kayak and Expedia are an early example of this. They retrieve pricing information from vendors, user content from review sites and even some destination-related information from travel sites. This ups the game in terms of what we expect from online functionality when we book a trip. Our expectation has been reset because Kayak offers a more efficient way to book travel than using search engines and independent vendor sites. That’s why we don’t immediately go to Google when we’re planning a trip.

Let’s fast-forward a few years to see how our expectations could be reset in the future. I suspect we’re not too far away from having an app where our travel preferences have been preset. This proposed app would know how we like to travel and the things we like to do when we’re on vacation. It would know the types of restaurants we like, the attractions we visit, the activities we typically do, the types of accommodation we tend to book, etc. It would also know the sources we tend to use when qualifying our options (e.g. TripAdvisor). If we had such an app, we would simply put in the bare details of our proposed trip: departure and return dates, proposed destinations and an approximate itinerary. It would then go and assemble suggestions based on our preferences, all in one location. Booking would require a simple click, because our payment and personal information would be stored in the app. There would be no discrete steps, no hopping back and forth between sites, no cutting and pasting of information, no filling out forms with the same information multiple times. After confirmation, the entire trip and all required information would be made available on our mobile devices. And even after the initial booking, the app would continue to comb the internet for new suggestions, reviews or events that we might be interested in attending.

This “mega-app” would take the best of Kayak, TripAdvisor, Yelp, TripIt and many other sites and combine it all in one place. If you love travel as much as I do, you wouldn’t be able to wait to get your hands on such an app. And the minute you did, your brain would reset its idea of what an acceptable outcome would be. There would be a cascade of broken habits and discarded procedures.

This integration of functionality and information foraging is where the web will go next. Over the next 10 years, usefulness will become the new benchmark for online loyalty. As this happens, our expectation set points will be changed over and over again. And this, more than anything, will be what impacts user loyalty in the future. This changing of expectations is the single biggest threat that Google faces.

In the next post I’ll look at what happens when our expectations get reset and we have to look at adopting a new technology.

Our Brain on Books

Here’s another neuroscanning study out of Emory University showing the power of a story.

Lead researcher Gregory Berns and his team wanted to “understand how stories get into your brain, and what they do to it.” Their findings seem to indicate that stories, in this case a historical fiction novel about Pompeii, caused a number of changes in the participants’ brains, at least in the short term. Over time, some of these changes decayed, but more research is required to determine how long-lasting the changes are.

One would expect reading to alter related parts of the brain, and this was true in the Emory study. The left temporal cortex, a section of the brain that handles language reception and interpretation, showed signs of heightened connectivity for a period of time after reading the novel. This is almost like the residual effects of exercise on a muscle, which responds favorably to usage.

What was interesting, however, was that the team also saw increased connectivity in the areas of the brain that control representations of sensation for the body. This relates to Antonio Damasio’s “Embodied Semantics” theory, where the reading of metaphors, especially those relating specifically to tactile images, activates the same parts of the brain that control the corresponding physical activity. The Emory study (and Damasio’s work) seems to show that if you read a novel that depicts physical activity, such as running through the streets of Pompeii as Vesuvius erupts, your brain is firing the same neurons as it would if you were actually doing it!

There are a number of interesting aspects to consider here, but what struck me is the multi-prong impact a story has on us. Let’s run through them:

Narratives have been shown to be tremendously influential frameworks through which we learn and update our sense of the world, including our own belief networks. Books have been a tremendously effective agent for meme transference and propagation. The structure of a story allows us to grasp concepts quickly, but also reinforces those concepts because it engages our brain in a way that a simple recital of facts could not. We relate to protagonists and see the world through their eyes. All our socially tuned, empathetic abilities kick into action when we read a story, helping to embed new information more fully. Reading a story helps shape our world view.

Reading exercises the language centers of our brain, heightening neural connectivity and improving their effectiveness. Neurologists call this “shadow activity” – a concept similar to muscle memory.

Reading about physical activity fires the same neurons that we would use to do the actual activity. So, if you read an action thriller, even though you’re lying flat on a sofa, your brain thinks you’re the one racing a motorcycle through the streets of Istanbul and battling your arch nemesis on the rooftops of Rome. While it might not do much to improve muscle tone, it does begin to create neural pathways. It’s the same concept as the visualization used by Olympic athletes.

For Future Consideration

As we learn more about the underlying neural activity of story reading, I wonder how we can use this to benefit ourselves. The biggest question I have is: if a story in written form has this capacity to impact us at all the aforementioned levels, what would more sense-engaged media like television or video games do? If reading about a physical activity tricks the brain into firing the corresponding sensory-controlling neurons, what would happen if we were simulating that activity on a motion-controlled gaming system like Microsoft’s Xbox? My guess would be that the sensory motor connections would be much more active (because we’re physically active). Unfortunately, research in the area of embodied semantics is still at an early stage, so many of these questions have yet to be answered.

However, if our stories are conveyed through a more engaging sensory experience, with full visuals and sound, do we lose some opportunity for abstract analysis? The parts of our brain we use to read depend on relatively slow processing loops. I believe much of the power of reading lies in the requirements it places on our imagination to fill in the sensory blanks. When we read about a scene in Pompeii, we have to create the visuals, the soundtrack and the tactile responses. Does all this required rendering more fully engage our sense-making capabilities, giving us more time to interpret and absorb?

Psychological Priming and the Path to Purchase

First published March 27, 2013 in Mediapost’s Search Insider

In marketing, I suspect we pay too much attention to the destination, and not enough to the journey. We don’t take into account the cumulative effect of the dozens of subconscious cues we encounter on the path to our ultimate purchase. We certainly don’t understand the subtle changes of direction that can result from these cues.

Search is a perfect example of this.

As search marketers, we believe that our goal is to drive a prospect to a landing page. Some of us worry about the conversion rates once a prospect gets to the landing page. But almost none of us think about the frame of mind of prospects once they reach the landing page.

“Frame” is the appropriate metaphor here, because the entire interaction will play out inside this frame. It will impact all the subsequent “downstream” behaviors. The power of priming should not be taken lightly.

Here’s just one example of how priming can wield significant unconscious power over our thoughts and actions. Participants primed by exposure to a stereotypical representation of a “professor” did better on a knowledge test than those primed with a representation of a “supermodel.”

A simple exposure to a word can do the trick. It can frame an entire consumer decision path. So, if many of those paths start with a search engine, consider the influence that a simple search listing may have.

We could be primed by the position of a listing (higher listings = higher quality alternatives).  We could be primed (either negatively or positively) by an organization that dominates the listing real estate. We could be primed by words in the listing. We could be primed by an image. A lot can happen on that seemingly innocuous results page.

Of course, the results page is just one potential “priming” platform. Priming could happen on the landing page, a third-party site or the website itself. Every single touch point, whether we’re consciously interacting with it or not, has the potential to frame, or even sidetrack, our decision process.

If the path to purchase is littered with all these potential landmines (or, to take a more positive approach, “opportunities to persuade”), how do we use this knowledge to become better marketers? This does not fall into the typical purview of the average search marketer.

Personally, I’m a big fan of the qualitative approach (I know — big surprise) in helping to lay down the most persuasive path possible. Actually talking to customers, observing them as they navigate typical online paths in a usability testing session, and creating some robust scenarios to use in your own walk-throughs will yield far better results than quantitative number-crunching. Excel is not particularly good at being empathetic.

Jakob Nielsen has said that online, branding is all about experience, not exposure. As search marketers, it’s our responsibility to ensure that we’re creating the most positive experience possible, as our prospects make their way to the final purchase.

The devil, as always, is in the details — whether we’re paying conscious attention to them or not.

Weighing Positive and Negative Impacts on Users

First published January 31, 2013 in Mediapost’s Search Insider

We humans hate loss. In fact, we seem to weight losses about twice as heavily as equivalent gains. For example, imagine I gave you a coffee cup and then offered to buy it back from you. That’s scenario 1. In scenario 2, I ask you to buy the same coffee cup from me. The price you assign to the coffee cup in the first scenario will be, on average, about twice as much as in the second. And yes, there’s research to back this up.

When it comes to winning and losing, it’s been proven that “losses loom larger than gains.” It’s just one of the weird glitches in our logical circuitry. We tend to be hardwired to look at glasses as half empty.
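For the curious, this asymmetry has actually been formalized. Kahneman and Tversky’s prospect-theory value function captures the half-empty glass in a single equation; the parameter values below are their published median estimates, and the code itself is just an illustrative sketch, not a fitted model:

```python
# A sketch of the prospect-theory value function. The parameters
# (alpha, beta, lam) are Tversky and Kahneman's median estimates;
# treat this as an illustration, not a precise model of any buyer.

def subjective_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Losses loom larger than gains: losing x feels roughly
    twice as bad as gaining x feels good."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

gain = subjective_value(10)    # felt value of gaining $10
loss = subjective_value(-10)   # felt value of losing $10
print(abs(loss) / gain)        # roughly 2.25
```

That ratio of roughly two-to-one is the coffee-cup result in mathematical form.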

Recently, I was reviewing an academic study done in 2008, with this scintillating title: “Procedural Priming and Consumer Judgment: Effects on the Impact of Positively and Negatively Valenced Information” by Shen and Wyer. If you can get beyond the rather dry title, you find a treasure trove of tidbits to consider when crafting your online user experience.

For example, when we evaluate a product for potential purchase, we may run across both positive and negative information. The order we run into this information can have a dramatic impact on what we do downstream from that interaction. To use psychological terms, it “primes” our mental framework.  And, because we tend to focus on negatives, less favorable information has a greater impact on our decision than positive information.

But it’s not just that we pay more attention to bad news than good news. It’s that bad news can hijack the entire consideration process. According to Shen and Wyer, if we run into negative information, it can change our information-seeking strategies, leading us down further negatively biased channels to confirm the initial information we saw. Bad news tends to lead to more bad news.

Also, we can get “bad news” hangovers. If we compare negatives in one decision process, that negative mental framework can carry over to an entirely different decision that has nothing to do with the first, giving us a heightened awareness of negative information in the new situation.

Here’s another interesting finding. If we’re rushed for time, this preoccupation with the negatives will dramatically affect the decision we make. But, if we have all the time in the world, the impact is relatively insignificant. Given time, we seem to cancel out our inherently negative biases.

All this news is not bad for marketers, however. It seems that simply getting users to state their preference for one feature over another, even though they’re not actively considering purchase at that time, leads to a much greater likelihood of purchase in the future. It seems that if you can get users to compare alternatives — and, more importantly, to commit to saying they prefer one alternative over another — they clear the mental hurdle of deciding “will I buy?” and instead start considering  “what will I buy?”

Finally, there is also a recency effect, especially if prospects had ample time to consider all their alternatives. Shen and Wyer found that the last information considered seemed to have the greatest effect on the buyer.  So, if information was both positive and negative, it was good to get the least favorable information in front of the prospect early, and then move to the most favorable information. Again, this is true only if the user had plenty of time to weigh the options. If they were rushed, the opposite was true.

All in all, these are all intriguing concepts to consider when crafting an ideal online user experience. They also underscore the importance of first impressions, especially negative ones.

A Look at the Future through Google Glasses?

First published June 7, 2012 in Mediapost’s Search Insider

“A wealth of information creates a poverty of attention.” — Herbert Simon

Last week, I explored the dark recesses of the hyper-secret Google X project.  Two X Projects in particular seem poised to change our world in very fundamental ways: Google’s Project Glass and the “Web of Things.”

Let’s start with Project Glass. In a video entitled “One Day…,” the future seen through the rose-colored hue of Google Glasses seems utopian, to say the least. In the video, we step into the starring role, strolling through our lives while our connected Google Glasses feed us a steady stream of information and communication — a real-time connection between our physical world and the virtual one.

In theory, this seems amazing. Who wouldn’t want to have the world’s sum total of information available instantly, just a flick of the eye away?

Couple this with the “Web of Things,” another project said to be in the Google X portfolio.  In the Web of Things, everything is connected digitally. Wearable technology, smart appliances, instantly findable objects — our world becomes a completely inventoried, categorized and communicative environment.

Information architecture expert Peter Morville explored this in his book “Ambient Findability.”  But he cautions that perhaps things may not be as rosy as you might think after drinking the Google X Kool-Aid. This excerpt is from a post he wrote on Ambient Findability:  “As information becomes increasingly disembodied and pervasive, we run the risk of losing our sense of wonder at the richness of human communication.”

And this brings us back to the Herbert Simon quote — knowing and thinking are not the same thing. Our brains were not built on the assumption that all the information we need is instantly accessible. And, if that does become the case through advances in technology, it’s not at all clear what the impact on our ability to think might be. Nicholas Carr, for one, believes that the Internet may have the long-term effect of actually making us less intelligent. And there’s empirical evidence he might be right.

In his book “Thinking, Fast and Slow,” Nobel laureate Daniel Kahneman says that while we have the ability to make intuitive decisions in milliseconds (Malcolm Gladwell explored this in “Blink”), humans also have a nasty habit of using these “fast” mental shortcuts too often, relying on gut calls that are often wrong (or, at the very least, biased) when we should be using the more effortful “slow” and rational capabilities that tend to live in the frontal part of our brain. We rely on beliefs, instincts and habits, at the expense of thinking. Call it informational instant gratification.

Kahneman recounts a seminal study in psychology, where four-year-old children were given a choice: they could have one Oreo immediately, or wait 15 minutes (in a room with the offered Oreo in front of them, with no other distractions) and have two Oreos. About half of the children managed to wait the 15 minutes. But it was the follow-up study, where the researchers followed what happened to the children 10 to 15 years later, that yielded the fascinating finding:

“A large gap had opened between those who had resisted temptation and those who had not. The resisters had higher measures of executive control in cognitive tasks, and especially the ability to reallocate their attention effectively. As young adults, they were less likely to take drugs. A significant difference in intellectual aptitude emerged: the children who had shown more self-control as four year olds had substantially higher scores on tests of intelligence.”

If this is true for Oreos, might it also be true for information? If we become a society that expects to have all things at our fingertips, will we lose the “executive control” required to actually think about things? Wouldn’t it be ironic if Google, in fulfilling its mission to “organize the world’s information,” inadvertently transgressed against its other mission, “don’t be evil,” by making us all attention-deficit, intellectually diminished, morally bankrupt dough heads?

As We May Remember

First published January 12, 2012 in Mediapost’s Search Insider

In his famous Atlantic Monthly essay “As We May Think,” published in July 1945, Vannevar Bush forecast a mechanized extension to our memory that he called a “memex”:

Consider a future device for individual use, which is a sort of mechanized private file and library. It needs a name, and to coin one at random, “memex” will do. A memex is a device in which an individual stores all his books, records, and communications, and which is mechanized so that it may be consulted with exceeding speed and flexibility. It is an enlarged intimate supplement to his memory.

Last week, I asked you to ponder what our memories might become now that Google puts vast heaps of information just one click away. And ponder you did:

I have to ask, WHY do you state, “This throws a massive technological wrench into the machinery of our own memories,” inferring something negative??? Might this be a totally LIBERATING situation? – Rick Short, Indium Corporation

Perhaps, much like using dictionaries in grade school helped us to learn and remember new information, Google is doing the same? Each time we “google” and learn something new aren’t we actually adding to our knowledge base in some way? – Lester Bryant III

Finally, I ran across this. Our old friend Daniel Wegner (transactive memory) and colleagues Betsy Sparrow and Jenny Liu from Columbia University actually did research on this very topic this past year. It appears from the study that our brains are already adapting to having Internet search as a memory crutch. Participants were less likely to remember information they looked up online when they knew they could access it again at any time. Also, if they looked up information that they knew they could remember, they were less likely to remember where they found it. But if the information was determined to be difficult to remember, the participants were more likely to remember where they found it, so they could navigate there again.

The beautiful thing about our capacity to remember things is that it’s highly elastic. It’s not restricted to one type of information. It will naturally adapt to new challenges and requirements. As many rightly commented on last week’s column, the advent of Google may introduce an entirely new application of memory — one that unleashes our capabilities rather than restricts them. Let me give you an example.

If I had written last week’s column in 1987, before the age of Internet Search, I would have been very hesitant to use the references I did: the Transactive Memory Hypothesis of Daniel Wegner, and the scene from “Annie Hall.” That’s because I couldn’t remember them that well. I knew (or thought I knew) what the general gist was, but I had to search them out to reacquaint myself with the specific details of each. I used Google in both cases, but I was already pretty sure that Wikipedia would have a good overview of transactive memory and that YouTube would have the clip in question. Sure enough, both those destinations topped the results that Google brought back. So, my search for transactive memory utilized my own transactive memorizations. The same was true, by the way, for my reference to Vannevar Bush at the opening of this column.

By knowing what type of information I was likely to find, and where I was likely to find it, I could check the references to ensure they were relevant and summarize what I quickly researched in order to make my point. All I had to do was remember high-level summations of concepts, rather than the level of detail required to use them in a meaningful manner.

One of my favorite concepts is the idea of consilience – literally, the “jumping together” of knowledge. I believe one of the greatest gifts of the digitization of information is the driving of consilience. We can now “graze” across multiple disciplines without having to dive too deep in any one, and pull together something useful — and occasionally amazing. Deep dives are now possible “on demand.” Might our memories adapt to become consilience orchestrators, able to quickly sift through the sum of our experience and gather together relevant scraps of memory to form the framework of new thoughts and approaches?

I hope so, because I find this potential quite amazing.

Risk, Reward and the Buying Matrix

First published December 23, 2010 in Mediapost’s Search Insider

Last week, I explored how two parts of our brain, the nucleus accumbens and the anterior insula, are key in driving our buying behaviors. I compared them to the gas pedal and brake of our buying “engine.” The balance between the two is key to understanding how we are driven towards our ultimate decisions. The nucleus accumbens drives our anticipation of an emotional reward, and the anterior insula creates anxiety around areas of risk.

As it turns out, you can use the two as the axes of a matrix on which, theoretically, you could plot any purchase. The four quadrants would be, starting in the lower left and going clockwise: low risk/low reward, low risk/high reward, high risk/high reward and, finally, high risk/low reward. Let’s take a deeper dive into each quadrant to see what kind of purchases fall into each.
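If it helps to see the matrix laid out explicitly, here is a minimal sketch. The 0-to-10 scales, the 5.0 cutoff and the example purchases are arbitrary assumptions on my part; the quadrant labels are the ones described above:

```python
# A minimal sketch of the risk/reward matrix. The scales and the
# cutoff value are arbitrary assumptions for illustration only.

def quadrant(risk, reward, cutoff=5.0):
    """Place a purchase into one of the four quadrants."""
    risk_label = "high risk" if risk >= cutoff else "low risk"
    reward_label = "high reward" if reward >= cutoff else "low reward"
    return f"{risk_label}/{reward_label}"

print(quadrant(risk=2, reward=1))  # toilet paper: low risk/low reward
print(quadrant(risk=2, reward=8))  # craft beer: low risk/high reward
print(quadrant(risk=8, reward=8))  # e.g. a house: high risk/high reward
print(quadrant(risk=8, reward=1))  # e.g. insurance: high risk/low reward
```

Where a given purchase sits on each axis is, of course, personal; the quadrant it lands in is what predicts the buying behavior described below.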

Low Risk/Low Reward

This is the stuff of everyday life. If you’re a “to-do” list kind of person, these types of purchases would probably be on that list. Think of household supplies like toilet paper and laundry detergent, or the milk, dry goods, etc. that make up a large percentage of your grocery list. This is the world of consumer packaged goods. The only real exceptions are those products that represent personal indulgences, like a steak or your favorite premium ice cream.

There is a huge piece of the B2B market that falls into this category as well: office  and industrial supplies, parts and other often-purchased items.

There is no gas pedal and no brake on these purchases. While the low prices remove any real risk, these are also not the types of shopping trips you look forward to all day. You simply have to get them done. This means the personal engagement with the actual act of purchasing will be minimal. Here, we are creatures of habit. We go to the same places to buy the same things because we really don’t want to invest any more time than is necessary to get the job done. If you compete in this space, you have one strategy and one strategy only: provide the fastest and easiest path to purchase.

Low Risk/High Reward

Here, we have our little indulgences; the day-to-day treats that make life worth living. The entire premium consumer product industry lives squarely in this quadrant: premium desserts, pre-made meals, beauty care products, wines, craft beers and, moving into slightly greater degrees of risk, clothes, accessories, shoes, costume jewelry and electronic gadgets.  This is also where you’d find CDs, DVDs and books. It’s in this quadrant where Amazon rules.

These purchases are all gas and little brake.  If you ever make a purchase on impulse, it’s almost guaranteed to fall into this part of the behavioral matrix.  When women plan shopping trips, it’s to indulge their reward center with these types of purchases. But men are also vulnerable to the siren call of the indulgent purchase: gadgets, tools, sporting goods, electronic games — and, for the metro-men amongst us, clothes and accessories. By the way, manicures, pedicures and spa visits all qualify, along with movies, concerts and dining out.

This quadrant is particularly timely this time of year, because when you buy a gift for someone, you hope you’ve hit this quadrant. The tough part is knowing your recipients well enough to figure out what will kick their nucleus accumbens into high gear.

While the degree of risk doesn’t merit a lot of intensive research, here the buying can be as much fun as the owning, which generally means a higher degree of engagement on the part of the buyer. Shopping environments that enhance the reward part of the equation will be attractive. Buyers are susceptible to suggestion, especially if it comes through our social connections. And brand affinities are powerful here.

In my next column, I’ll provide some examples of the other two quadrants to see what kind of purchases fall into each. Then, we’ll see how each of these buying scenarios might map on the online consumer landscape.