The Psychology of Usefulness: The Acceptance of Technology – Part Three

In Part Two of this series, I looked at Davis and Bagozzi’s Technology Acceptance Model, first proposed in 1989.

[Figure: The Technology Acceptance Model]

As I said, while the model was elegant and parsimonious, it seemed to oversimplify the realities of technology acceptance decisions. In 2000, Venkatesh and Davis tried to address this with TAM 2 – the second version of the Technology Acceptance Model.

[Figure: TAM 2]

In this version, they added several determinants of Perceived Usefulness and demoted Perceived Ease of Use to just one of the factors feeding into it. Two mediating factors, Experience and Voluntariness, also influence this mental calculation. This rebalancing of factors provides some interesting insights into the mental process we go through when deciding whether or not we’ll accept a new technology.

Let’s begin with the determinants of Perceived Usefulness in the order they appear in Venkatesh and Davis’s model:

Subjective Norm: TAM 2 resurrects one of the key components of the original Theory of Reasoned Action model – the opinions of others in your social environment.

Image: Venkatesh and Davis also included another social factor in their list of determinants – how would the acceptance of this technology impact your status in your social network? Notice that our calculation of the image enhancement potential has the Subjective Norm as an input. It’s a Bayesian prediction – we start with our perceived social image status (the prior) and adjust it based on new information, in this case the acceptance of a new technology.
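
To make that framing concrete, here is a minimal sketch of a single Bayesian update, assuming a simple peer-endorsement signal (the names and numbers are invented for illustration; TAM 2 itself is specified as a regression model, not in Bayesian form):

```python
# A minimal, illustrative Bayesian update (not part of TAM 2 itself).
# Prior: our belief that adopting the technology will enhance our image.
# Evidence: an endorsement from our social circle (the subjective norm).

def bayes_update(prior, p_evidence_given_boost, p_evidence):
    """Posterior P(image boost | peer endorsement) via Bayes' rule."""
    return (p_evidence_given_boost * prior) / p_evidence

prior = 0.30                 # assumed: initial belief adoption boosts image
p_endorse_if_boost = 0.80    # assumed: peers endorse status-conferring tools
p_endorse_overall = 0.50     # assumed: base rate of peer endorsement

posterior = bayes_update(prior, p_endorse_if_boost, p_endorse_overall)
print(f"Belief after one peer endorsement: {posterior:.2f}")  # 0.48
```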

Job Relevance: How applicable is the technology to the job you have to do?

Output Quality: How will this technology impact your ability to perform your job well?

Result Demonstrability: How easy is it to show the benefits of accepting the technology?

It’s interesting to note how these factors split: the first two (Subjective Norm and Image) relate to social networks, the next two (Job Relevance and Output Quality) are part of a mental calculation of benefit, and the last one, Result Demonstrability, bridges the two categories: how easy will it be to show others that I made the right decision?

According to the TAM 2 model, we combine these factors – practical task-performance considerations and social status aspirations – into a rough calculation of the perceived usefulness of a technology. After this is done, we start balancing that against how easy we perceive the new technology to be to use. Venkatesh and Davis felt that Perceived Ease of Use has a variable influence in two areas: the forming of an attitude towards the technology and a behavioral intention to use the technology. The first is pretty straightforward. Our attitude is our mental frame regarding the technology. Again, to use a Bayesian term, it’s our prior. If the attitude is positive, it’s very probable that we’ll form a behavioral intention to use the technology. But there are a few mediating factors at this point, so let’s take a closer look at the creation of Behavioral Intention.

In forming our intention, Perceived Ease of Use is just one of the determinants in our “usefulness” calculation, according to the model. And it depends on a few things. It depends on self-efficacy – how comfortable we judge ourselves to be with the technology in question. It also depends on what resources we feel we will have access to that can help us up the learning curve. But, in the forming of our attitude (and thereby our intention), Venkatesh and Davis felt that Perceived Usefulness will typically matter more than Perceived Ease of Use. If we feel a technology will bring a big enough reward, we’re willing to put up with a significant degree of pain. At least, we are in what we intend to do. It’s like making a New Year’s resolution to lose weight: at the time we form the intention, the pain involved is sometime in the future, so we go forward with the best of intentions.
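
As a toy rendering of the structure described so far – the determinants and the usefulness-over-ease asymmetry come from TAM 2, but the equal weighting, the 0–1 scale and every number below are my own illustrative assumptions:

```python
# Illustrative sketch of the TAM 2 structure, not the published model:
# in the actual paper the weights are regression coefficients estimated
# from survey data, moderated by experience and voluntariness.

def perceived_usefulness(subjective_norm, image, job_relevance,
                         output_quality, demonstrability, ease_of_use):
    # The determinants of Perceived Usefulness (equal weights assumed)
    factors = [subjective_norm, image, job_relevance,
               output_quality, demonstrability, ease_of_use]
    return sum(factors) / len(factors)

def behavioral_intention(usefulness, ease_of_use,
                         w_usefulness=0.7, w_ease=0.3):
    # Usefulness typically outweighs ease of use, per Venkatesh and Davis;
    # the specific 0.7/0.3 split is an invented placeholder.
    return w_usefulness * usefulness + w_ease * ease_of_use

# A relevant, high-quality tool that is awkward to use (scores on 0-1):
pu = perceived_usefulness(0.6, 0.5, 0.9, 0.8, 0.7, ease_of_use=0.4)
bi = behavioral_intention(pu, ease_of_use=0.4)
print(f"Perceived usefulness: {pu:.2f}, intention to use: {bi:.2f}")
```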

As we move forward from Attitude to Intention, the transition is further mediated in the model by our subjective norm – the cognitive context we place the decision in. Into this subjective norm fall our experience (our own evaluation of our efficacy), the attitudes of others towards the technology and the “Voluntariness” of the acceptance. Obviously, our intention to use will be stronger if it’s a non-negotiable corporate mandate, as opposed to a low-priority choice we have the latitude to make.

What is missing from the TAM 2 model is the link between Perceived Ease of Use and actual Usage. Just like a New Year’s resolution, intentions don’t always become actions. Venkatesh and Davis saw Perceived Ease of Use as a moving, iteratively updated calculation. As we gain hands-on experience, we update our original estimate of Ease of Use, either positively or negatively. If the update is positive, it’s more likely that Intention will become Usage. If it’s negative, the technology may fail to be accepted. In fact, I would say this feedback loop is an ongoing process that may repeat several times in the space between Intention and Usage. The model, with a single arrow going in one direction from Intention to Usage, belies the complexity of what is happening here.
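
Here is a rough simulation of that loop as I read it – my own sketch, not anything specified in TAM 2 – in which each hands-on session pulls Perceived Ease of Use toward the tool’s actual ease, and intention drifts along with it:

```python
# My own sketch of the Intention-to-Usage feedback loop described above.
# All rates, weights and thresholds are invented for illustration.

def simulate_adoption(intention, perceived_ease, actual_ease,
                      sessions=5, learning_rate=0.4):
    for session in range(1, sessions + 1):
        # Hands-on experience pulls perception toward reality
        perceived_ease += learning_rate * (actual_ease - perceived_ease)
        # Intention drifts with the updated perception
        intention = 0.5 * intention + 0.5 * perceived_ease
        print(f"Session {session}: ease={perceived_ease:.2f}, "
              f"intention={intention:.2f}")
    return intention > 0.5  # crude threshold for intention becoming usage

# Optimistic first impression meets a disappointing reality:
accepted = simulate_adoption(intention=0.8, perceived_ease=0.7,
                             actual_ease=0.3)
print("accepted" if accepted else "abandoned")  # abandoned
```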

Venkatesh and Davis wanted to create a more realistic model, expanding the front end of the model to account for determinants going into the creation of Intention. They also wanted to provide a model of the decision process that better represented how we balance Perceived Usefulness and Perceived Ease of Use. I think they made some significant gains here. But the model is still a linear one – going in one direction only. What they missed is the iterative nature of acceptance decisions, especially in the gap between Intention and Behavior.

In Part Four, we’ll look at TAM 3 and see how Venkatesh further modified his model to bring it closer to the real world.

So, Six Seconds is the Secret, Huh?

First published February 13, 2014 in Mediapost’s Search Insider

Apparently, the new official time limit for customer engagement is 6 seconds, according to a recent post on Real Time Marketing. How did we come up with 6? Well, in the world of social media engagement it seemed like a good number and no one has called bullshit on it yet, so 6 it is.

Marketers love to talk about time – just in time, real time, right time. At the root of all this “time talk” is the realization that customers really don’t have any time for us, so we have to somehow jam our messages into the tiny little cracks that may appear in the wall of willful ignorance they carefully build against marketing. The marketer’s goal is to erode their defenses by looking for any weakness that may appear.

Look at the supposed poster child for Real Time Marketing – the Oreo coup staged during the blackout in the 2013 Super Bowl. Because the messaging was surprising and clever, and because, let’s face it, we weren’t doing much of anything else anyway, Oreo managed to gain a foothold in our collective consciousness for a few precious seconds. So, marketers being marketers, we all stumbled over ourselves to proclaim a new channel and launch a series of new micro-attacks on consumers. That’s where the 6 seconds came from. Apparently, that’s the secret to storming the walls. Five seconds and you’re golden. Seven seconds and you’re dead.

Oreo surprised us, and it wasn’t because the message was 6 seconds long. It was because we weren’t expecting a highly relevant, highly timely message. Humans are built to respond to things that don’t fit within our expected patterns. The whole approach of marketing is to constantly blanket us with untimely, irrelevant messages. Marketers, to be fair, try to deliver the right message at the right time to the right person, but it’s really hard to do that. So, we overcompensate by delivering lots of messages all the time to everyone, hoping to get lucky. Not to take anything away from the cleverness and nimbleness of the Oreo campaign, but they got lucky. We were surprised and we let our defenses down long enough to be amused and entertained. Real time marketing wasn’t a brilliant new channel; it was a shot in the dark – literally.

And there’s no six-second gold standard of engagement. If you can deliver the right message at the right time to the right person, you can spend hours talking to your prospective customer. It’s only when you’re trying to interrupt someone with something irrelevant that you have to shoehorn it into their consciousness. Think of it as a Maslow’s hierarchy of advertising effectiveness. At its best, advertising is useful. This sits at the top of the pyramid. After usefulness comes relevance – even if I don’t find the ad useful right now, at least you’re talking to the right person. After relevance comes entertainment – I’ll willingly give you a few seconds of my time if I find your message amusing or emotionally engaging. I may not buy, but I’ll spend some time with you. After entertainment comes the category the majority of advertising falls into – a total waste of my time. Not useful, irrelevant, not emotionally engaging. And making an ad that falls into this category 5 seconds long, no matter what channel it’s delivered through, won’t change that. You may fool me once, but next time, I’m still going to ignore you.

There was something important happening with the Oreo campaign at the 2013 Super Bowl, but it had nothing to do with some new magic formula or some recently discovered loophole in our cognitive defenses. It was a sign of what may, hopefully, emerge as a trend in advertising – nimble, responsive marketing that establishes a true feedback loop with prospects. What the blackout in New Orleans may have shown us is a new, very potent way to make sense of our markets and establish a truly interactive, responsive dialogue with them. If that’s the case, we may have just found a way to climb a rung or two on the Advertising Effectiveness Hierarchy.

How Can Humans Co-Exist with Data?

First published February 6, 2014 in Mediapost’s Search Insider

Last week, I talked about our ability to ignore data. I positioned this as a bad thing. But Pete Austin called me on it, with an excellent counterpoint:

“Ignoring Data is the most important thing we do. Only the people who could ignore the trees and see the tiger, in real-time, survived to become our ancestors.”

Too true. We’re built to subconsciously filter and ignore vast amounts of input data in order to maintain focus on critical tasks, such as avoiding hungry tigers. If you really want to dive into this, I would highly recommend Daniel Simons and Christopher Chabris’s “The Invisible Gorilla.” But, as Simons and Chabris point out with example after example of how our intuitions (which we use as filters) can mislead us, this “inattentional blindness” is not always a good thing. In the adaptive environment in which we evolved, it was pretty effective at keeping us alive. But in a modern, rational environment, it can severely inhibit our ability to maintain an objective view of the world.

But Pete also had a second, even more valid point:

“What you need to concentrate on now is “curated data”, where the junk has already been ignored for you.”

And this brought to mind an excellent example from a recent interview I did as background for an upcoming book I’m working on.  This idea of pre-filtered, curated data becomes a key consideration in this new world of Big Data.

Nowhere are the stakes higher for the use of data than in healthcare. It’s what led to the publication of a manifesto in 1992 calling for a revolution in how doctors made life-and-death decisions. One of the authors, Dr. Gordon Guyatt, coined the term “evidence-based medicine.” The rationale is simple: by taking an empirical approach not just to diagnosis but also to the best prescriptive path, doctors can rise above the limitations of their own intuition and achieve higher accuracy. It’s data-driven decision-making, applied to health care. Makes perfect sense, right? But even though evidence-based medicine is now over 20 years old, it’s still difficult to apply consistently at the level of the individual doctor and patient.

I had the chance to ask Dr. Guyatt why this was:

“Essentially after medical school, learning the practice of medicine is an apprenticeship exercise and people adopt practice patterns according to the physicians who are teaching them and their role models and there is still a relatively small number of physicians who really do good evidence-based practice themselves in terms of knowing the evidence behind what they’re doing and being able to look at it critically.”

The fact is, a data-driven approach to any decision-making domain that previously relied on intuition just doesn’t feel – well – very intuitive. It’s hard work. It’s time-consuming. And, to Mr. Austin’s point, it runs directly counter to our tiger-avoidance instincts.

Dr. Guyatt confirms that physicians are not immune to this human reliance on instinct:

“Even the best folks are not going to do it – maybe the best folks – but most folks are not going to be able to do that very often.”

The answer in healthcare, and likely everywhere else that data should back up intuition, is the creation of solid, data-based resources that adhere to empirical best practices without requiring every single practitioner to do the necessary heavy lifting. Dr. Guyatt has seen exactly this trend emerge in the last decade:

“What you need is preprocessed information. People have to be able to identify good preprocessed evidence-based resources where the people producing the resources have gone through that process well.”

The promise of curated, preprocessed data is looming large in the world of marketing. The challenge is that, unlike medicine, where data is commonly shared and archived, in the world of marketing much of the most important data stays proprietary. What we have to start thinking about is a truly empirical, scientific way to curate, analyze and filter our own data for internal consumption, so it can be readily applied in real world situations without falling victim to human bias.

The Psychology of Usefulness: The Acceptance of Technology – Part Two

In my last post, I talked about how the Theory of Reasoned Action and the original Technology Acceptance Model tried to predict both intention and usage of new technologies. As a quick recap, let’s look again at Davis and Bagozzi’s original model.

[Figure: The Technology Acceptance Model]

In aiming for the simplest model possible, Davis and Bagozzi conflated a great deal at the front end of the model – with just one box representing external variables, which then led to two similarly conflated boxes: Perceived Usefulness and Perceived Ease of Use. While this simplification was admirable in the quest for parsimony, in real-world situations it went too far. There was a lot happening between the three boxes at the front of the model that demanded closer examination.

Davis indicated that there was an interesting relationship between Perceived Usefulness and Perceived Ease of Use. One of the mechanisms at play that has to be understood is self-efficacy. In understanding adoption of technology, self-efficacy is a key factor. Essentially, it means that the easier a system is to use, the greater the user’s sense of efficacy. They believe they have control over what they are doing. And control, especially in a work context, is a strong motivational driver. There is an extensive body of work exploring the psychological importance of control. If we feel we’re in control, we also feel empowered to mitigate risk. The concept of self-efficacy helps to highlight the importance of the Perceived Ease of Use box. But what about the other box: Perceived Usefulness?

Davis, in his accompanying notes and research, indicated that Perceived Usefulness is a stronger indicator of intention than Perceived Ease of Use. In other words, we are willing to put up with some pain to learn a new technology if we feel it will offer a significant improvement in our ability to complete a task. This balancing equation requires two heuristic evaluations on the part of the user: the allocation of cognitive resources required to gain proficiency and the expected usefulness of the tool once proficiency is gained. This is exactly the same equation used in Charnov’s Marginal Value Theorem, applied in a different context. In optimal foraging, we (and all animals who forage) balance expenditure of resources required to reach a food patch against the expected food value to be derived from that patch. In technology adoption, we balance expenditure of resources required to master a new technology against the increased usefulness that technology offers.
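
To make the parallel explicit, here is the standard statement of Charnov’s optimality condition (the adoption reading that follows is my own gloss, not Charnov’s):

$$g'(t^*) = \frac{g(t^*)}{\tau + t^*}$$

Here $g(t)$ is the cumulative gain from spending time $t$ in a patch and $\tau$ is the average travel time between patches: the forager should leave at the moment $t^*$ when the patch’s marginal rate of return drops to the average rate for the environment as a whole. Substitute “productivity from the current tool” for $g(t)$ and “cost of finding and learning an alternative” for $\tau$, and you have the same balancing equation described above.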

In this heuristic evaluation, there are four key marketing lessons for anyone whose business model relies on the adoption of new technology:

1)   Lessen the intimidation of the learning curve. Persuade the user (and this is a key point that I’ll return to in point 4) that this is a reasonable investment of resources. Build a sense of perceived ease of use. Provide visible links to intuitive learning resources. Often, marketers overplay the feature benefits of their products to show how powerful they are. But, as they’re doing this, they fail to realize that this upsets the balance between perceived usefulness and perceived ease of use.

2)   Provide clear examples of perceived usefulness in terms that are immediately relevant. Remember, this is the key factor in the equation the prospect is trying to balance. The more salient you can make the perceived usefulness, the more likely the user is to adopt it, even if a learning curve is present. Ideally, get that usefulness across with very specific, industry relevant examples that allow the user to visualize usage of the technology.

3)   Remember that the user is balancing the two factors. Ease of use is great, but it can’t come at the expense of overall usefulness. In calculating the right balance (which should be done with extensive testing and feedback from target customers), the technology should offer a significant gain in usefulness (as measured against any incumbent technologies) for a relatively manageable investment of resources.

4)   Remember that you’re talking to a user. When trying to strike the right balance, remember that you’ll probably be talking to different people as the decision progresses. For the user, the right balance between perceived usefulness and perceived ease of use must be struck. But at some point, you’ll be talking to a buyer, not a user, before the sale is actually closed. This would be one of those external variables that fall outside the scope of the Technology Acceptance Model. This switching of roles from “doers” to “buyers” is dealt with extensively in my book, The BuyerSphere Project.

In the next post, I’ll talk about how the Technology Acceptance Model has been modified over the past 2 decades so it better reflects real world decision making.

The Psychology of Usefulness: The Acceptance of Technology – Part One

In the last post, I talked about what it takes to break a habit built around an online tool, website or application. In today’s post, I want to talk about what happens when we decide to replace that functional aid, whatever it might be.

So, as I said last time, the biggest factor contributing to the breakdown of habit is the resetting of our expectation of what is an acceptable outcome. If our current tools no longer meet this expectation, then we start shopping for a new alternative. In marketing terms, this would be the triggering of need.

Now, this breakdown of expectation can play out in one of two ways. First, if we’re not aware of an alternative solution, we may just feel an accumulation of frustration and dissatisfaction with our current tools. This buildup of frustration can create a foundation for further “usefulness foraging” but generally isn’t enough by itself to trigger action. This lends support to my hypothesis that we’re borrowing the evolved Marginal Value algorithm to help us judge the usefulness of our current tools. To put it in biological terms we’re more familiar with: “A bird in the hand is worth two in the bush.” You don’t leave a food patch unless: A) you are reasonably sure there’s another, more promising patch that can be reached with acceptable effort, or B) you have completely exhausted the food available in the patch you’re in. I believe the same is true for usefulness. We don’t throw out what we have until we either know there’s an acceptable alternative that promises a worthwhile increase in usefulness or our current tool is completely useless. Until then, we put up with the frustration.

The Technology Acceptance Model

Let’s say that we have decided it’s worth the effort to find an alternative. What are the mechanisms we use to find the best one? Fred Davis and Richard Bagozzi tackled that question in 1989 and came up with the first version of their Technology Acceptance Model. It took the Theory of Reasoned Action, developed by Martin Fishbein and Icek Ajzen a decade earlier (1975, 1980), and tried to apply it to the adoption of new technology. They also relied on Everett Rogers’s work on the diffusion of innovations.

First of all, like all models, the TAM had to make some assumptions to simplify real world decisions down to a theoretical model. And, in doing so, it has required a number of revisions to try to bring it closer to what technology adoption decisions look like in the real world.

Let’s start with the foundation of the Theory of Reasoned Action. In its simplest form, the TRA says that voluntary behavior is predicted by an individual’s attitude towards that behavior and how they think others would think of them if they performed that behavior.

[Figure: The Theory of Reasoned Action]

So, let’s take the theory for a test drive – if you believe that exercising will increase your health and you also believe that others in your social circle will applaud you for exercising, you’ll exercise. With this example, I think you begin to see where the original TRA may run into problems. Even with the best of intentions, we may not actually make it to the gym. Fishbein and Ajzen’s goal was to create an elegant, parsimonious model that would reliably predict both behaviors and intentions, creating a distinction between the two. Were they successful?

In a meta-analysis of TRA, Sheppard et al. (1988) found that attitude was a fairly accurate predictor of intention. If you believe going to the gym is a good thing, you will probably intend to go to the gym. The model didn’t do quite as good a job in predicting behavior. Even if you did intend to go to the gym, would you actually go?

The successful progression from intention to behavior seemed to be reliant on several real-world factors, including the time between intention and action (the longer the interval, the greater the erosion of intention) and lack of control. To return to the gym example: what if your gym suddenly increased its membership fees, or a sudden snowstorm made it difficult to drive there?

Also, if you were choosing from a set of clear alternatives and had to choose one, TRA did a pretty good job of predicting behaviors. But if alternatives were undetermined, or there were other variables to consider, then the predictive accuracy of TRA dropped significantly.

Let me offer an example of how TRA might not work very well in a real-world setting. In my book, The BuyerSphere Project, I spent a lot of time looking at the decision process in B2B buying scenarios. If we used the TRA model, we could say that if a buyer had to choose among four different software programs for their company, we could use their attitudes towards each of the respective programs, as well as the aggregated (and weighted – because not every opinion should carry the same weight) attitudes of the buyer’s co-workers, peers and bosses, to determine their intention. And once we have their intention, that should lead to behavior.

But in this scenario, let’s look at some of the simplifying assumptions we’ve had to make to try to cram a real-world scenario into the Fishbein-Ajzen model:

  • We assume a purchase will have to be made from one of the four alternatives. In a real-world situation, the company may well decide to stick with what they have.
  • We assume the four choices will remain static and we won’t get a new candidate out of left field.
  • We assume that attitudes towards each of the alternatives will remain static through the behavioral interval. This almost never happens in B2B buying scenarios.
  • We assume the buyer – or rational agent – will be in full control of their behaviors and the ultimate decision. Again, this is rarely the case in B2B buying decisions.
  • We assume that there won’t be some mitigating factor that arises between intention and behavior – for example, a spending freeze or a change in requirements.

As you can see, in trying to create a parsimonious model, Fishbein and Ajzen ran into a common trap – they simplified to the point where the model failed to work consistently in the real world.

But in her review of the theory, Alice Darnell pointed out Sheppard’s main criticism of the TRA model:

Sheppard et al. (1988) also addressed the model’s main limitation, which is that it fails to account for behavioural outcomes which are only partly under the individual’s *volitional* control.

I’ve added bolding to the word volitional on purpose. I’ve highlighted many external factors that may lie beyond the volitional control of the individual, but I think the biggest limitation of the TRA lies in its name: Theory of Reasoned Action. It assumes that reason drives our intentions and behaviors. It doesn’t account for emotion.

Applying Reasoned Action to Technology Acceptance

Now, let’s see how Davis and Bagozzi took Fishbein and Ajzen’s foundational work and applied it to the acceptance of new technologies.

In their first model (1989), they took attitudes and subjective norms (the attitudes of others) and adapted them for a more applied activity: the use of a new technological tool. They came up with two attitude drivers: Perceived Usefulness and Perceived Ease of Use. If you think back to Charnov’s Marginal Value Theorem, this is exactly the same risk/reward mechanism at work. In foraging, it’s the yield of food weighed against the perceived required effort. In technology acceptance, Perceived Usefulness is the reward and Perceived Ease of Use is the risk to be calculated. Davis and Bagozzi assume the user does a quick mental calculation, using their own knowledge and the knowledge of others, to come up with a Usefulness/Ease value that creates their attitude towards using. This then becomes their Behavioral Intention to Use – which should lead to Actual System Use.

[Figure: The original Technology Acceptance Model]

The TAM model was clean and parsimonious. There was just one problem. It didn’t do a very good job of predicting usage in real world situations. There seemed to be much more at work here in actual decisions to accept technologies. In the next post, we’ll look at how the TAM model was modified to bring it closer to real behaviors.

Never Underestimate the Human Ability to Ignore Data

First published January 30, 2014 in Mediapost’s Search Insider

It’s one thing to have data. It’s another to pay attention to it.

We marketers are stumbling over ourselves to move to data-driven marketing. No one would say that’s a bad thing. But here’s the catch. Data-driven marketing is all well and good when it’s a small-stakes game – optimizing spend, targeting, conversion rates, etc. If we gain a point or two on the topside, so much the better. And if we screw up and lose a point or two – well – mistakes happen, and as long as we fix it quickly, no permanent harm done.

But what if the data is telling us something we don’t want to know? I mean – something we really don’t want to know. For instance, our brand messaging is complete BS in the eyes of our target market, or they feel our products suck, or our primary revenue source appears to be drying up or our entire strategic direction looks to be heading over a cliff? What then?

This reminds me of a certain CMO of my acquaintance who was a “Numbers Guy.” In actual fact, he was a numbers guy only if the numbers said what he wanted them to say. If not, then he’d ask for a different set of numbers that confirmed his view of the world. This data hypocrisy generated a tremendous amount of bogus activity in his team, as they ran around grabbing numbers out of the air and massaging them to keep their boss happy. I call this quantifiable bullshit.

I think this is why data tends to be used to optimize tactics, but why it’s much more difficult to use data to inform strategy. The stakes are much higher and even if the data is providing clear predictive signals, it may be predicting a future we’d rather not accept. Then we fall back on our default human defense: ignore, ignore, ignore.

Let me give you an example. Any human who functions even slightly above the level of brain-dead has to accept the data that says our climate is changing. The signals couldn’t be clearer. And if we choose to pay attention to the data, the future looks pretty damn scary. Best-case scenario – we’re probably screwing up the planet for our children and grandchildren. Worst-case scenario – we’re definitely screwing up the planet and it will happen in our lifetime. And we’re not talking about an increased risk of sunburn. We’re talking about the potential end of our species. So what do we do? We ignore it. Even when flooding, drought and ice storms without historic precedent are happening in our back yards. Even when Atlanta is paralyzed by a freak winter storm. Nothing about what is happening is good news, and it’s going to get worse. So, damn the data, let’s just look the other way.

In a recent poll by the Wall Street Journal, out of a list of 15 things that Americans believed should be top priorities for President Obama and Congress, climate change came out dead last – behind pension reform, Iran’s nuclear program and immigration legislation. Yet, if we look at the data that the UN and the World Economic Forum collect, quantifying the biggest threats to our existence, climate change is consistently near the top, both in terms of likelihood and impact. But it’s really hard to do something about it. It’s a story we don’t want to hear, so we just ignore the data, like the aforesaid CMO.

As we get access to more and more data, it will be harder and harder to remain uninformed, but I suspect it will have little impact on our ability to be ignorant. If we don’t know something, we don’t know it. But if we can know something, and we choose not to, that’s a completely different matter. That’s embracing ignorance. And that’s dangerous. In fact, it could be deadly.

Who Owns Your Data (and Who Should?)

First published January 23, 2014 in Mediapost’s Search Insider

Last week, I talked about a backlash to wearable technology. Simon Jones, in his comment, pointed to a recent post where he raised the very pertinent point – your personal data has value. Today, I’d like to explore this further.

I think we’re all on the same page when we say there is a tidal wave of data that will be created in the coming decade. We use apps – which create data. We use/wear various connected personal devices – which create data. We go to online destinations – which create data. We interact with an ever-increasing number of wired “things” – which create data. We interact socially through digital channels – which create data.  We entertain ourselves with online content – which creates data. We visit a doctor and have some tests done – which creates data. We buy things, both online and off, and these actions also create data. Pretty much anything we do now, wherever we do it, leaves a data trail. And some of that data, indeed, much of it, can be intensely personal.

As I said some weeks ago, all this data is creating an ecosystem that is rapidly multiplying and, in its current state, is incredibly fractured and chaotic. But, as Simon Jones rightly points out, there is significant value in that data. Marketers will pay handsomely to have access to it.

But what, or who, will bring order to this chaotic and emerging market? The value of the data compounds quickly when it’s aggregated, filtered, cross-tabulated for correlations and then analyzed. As I said before, the captured data in its fragmented state is akin to a natural resource. To get to a more usable end state, you need to add a value layer on top of it. This value layer provides the additional steps required to extract the full worth of that data.

So, to retrace my logic: data has value, even in its raw state. Data also has significant privacy implications. And right now, it’s not really clear who owns what data. To move forward into a data market we can live with, I think we need to set some basic ground rules.

First of all, most of us who are generating data have implicitly agreed to a quid pro quo arrangement – we’ll let you collect data from us if we get an acceptable exchange of something we value. This could be functionality, monetary compensation (usually in the form of discounts and rewards), social connections or entertainment. But here’s the thing about that arrangement – up to now, we really haven’t quantified the value of our personal data. And I think it’s time we did that. We may be trading away too much for much too little.

To this point, we haven’t worried much about what we traded off, and to whom, because any data trails we left have been fragmented and specific to one context. But as that data gains more depth and, more importantly, as it combines with other fragments to provide much more information about who we are, what we do, where we go, who we connect with, what we value and how we think, it becomes more and more valuable. It represents an asset for those marketers who want to persuade us, but more critically, that data – our digital DNA – becomes vitally important to us. In it lies the quantifiable footprint of our lives and, like all data, it can yield insights we may never gain elsewhere. In the right hands, it could pinpoint critical weaknesses in our behavioral patterns, red flags in our lifestyle that could develop into future health crises, financial opportunities and traps, and ways to allocate time and resources more efficiently. As the digitally connected world becomes denser, deeper and more functional, that data profile will act as our key to it. All the potential of a new, fully wired world will rely on our data.

There are millions of corporations that are more than happy to warehouse their respective data profiles of you and sell them back to you on demand as you need them to access their services or tools. They will also be happy to sell them to anyone else who may need them for their own purposes. Privacy issues aside (at this point, data is commonly aggregated and anonymized), a more fundamental question remains: whose data is this? Whose data should it be? Is this the reward corporations reap for harvesting the data? Or, because this data represents you, should it remain your property, with you deciding who uses it and for what?

This represents a slippery slope we may already be starting down. And, if you believe this is your data and should remain so, it also marks a significant change from what’s currently happening. Remember, the value is not really in the fragments. It’s in bringing them together to create a picture of who you are. And we should be asking the question: who should have the right to create that picture of you – you, or a corporate data marketplace that exists beyond your control?

The Psychology of Usefulness: How Online Habits are Broken

Last post, I talked about how Google became a habit – Google being the most extreme case of online loyalty based on functionality I could think of. But here’s the thing with functionally based loyalty – it’s very fickle. In the last post I explained how Charnov’s Marginal Value Theorem dictates how long animals spend foraging in a patch before moving on to the next one. I suspect the same principles apply to our judging of usefulness. We only stay loyal to functionality as long as we believe there are no more-functional alternatives available to us for an acceptable investment of effort. If that functionality has become automated in the form of a habit, we may stick with it a little longer, simply because it takes our rational brain a while to figure out there may be better options, but sooner or later it will blow the whistle and we’ll start exploring. Charnov’s internal algorithm will tell us it’s time to move on to the next functional “patch.”

Habits break down when there’s a shift in one of the three prerequisites: frequency, stability or acceptable outcomes.

If we stop doing something on a frequent basis, the habit will slowly decay. But because habits tend to be stored at the limbic level (in the basal ganglia), they prove to be remarkably durable. There’s a reason we say old habits die hard. Even after a long hiatus we find that habits can easily kick back in. Reduction of frequency is probably the least effective way to break a habit.

A more common cause of habitual disruption is a change in stability. If something significant suddenly changes in our task environment, our “habit scripts” start running into obstacles. Think about the last time you did a significant upgrade to a program or application you use all the time. If menu options or paths to common functions change, you find yourself constantly getting frustrated because things aren’t where you expect them to be. Your habit scripts aren’t working for you anymore and you are being forced to think. That feeling of frustration is how the brain protects habits, and it shows how powerful our neural energy-saving mode is. But even if the task environment becomes unstable for a time, chances are the instability is temporary. The brain will soon reset its habits and we’ll be back plugging subconsciously away at our tasks. Instability does break a habit, but the brain just builds a new one to take its place.

A more permanent form of habit disruption comes when outcomes are no longer acceptable. The brain hates these types of disruptions, because it knows that finding an alternative could require a significant investment of effort. It basically puts us back at square one. The amount of investment required depends on a number of things: the scope of change required (is it just one aspect of a multi-step task, or the entire procedure?), current awareness of acceptable alternatives (is a better solution near at hand, or do we have to find it?), the learning curve involved (how different is the alternative from what we’re used to using?), other adoption requirements (do we have to invest resources, including time and/or money?) and how much downtime will be involved in adopting the alternative. All these questions are complexities that can factor into the Marginal Value calculation.
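
As a toy illustration of how those questions might combine into a single switching decision – the scoring, weights and threshold below are entirely my own invention, not a published model:

```python
# Invented toy scoring of the habit-switching questions listed above.
# Each barrier is scored from 0 (no obstacle) to 1 (severe obstacle).

def switching_cost(scope_of_change, alternative_unknown, learning_curve,
                   resource_investment, downtime):
    barriers = [scope_of_change, alternative_unknown, learning_curve,
                resource_investment, downtime]
    return sum(barriers) / len(barriers)

def should_switch(usefulness_gain, cost):
    # Marginal-value logic: leave the current "patch" only when the
    # expected gain clears the cost of reaching the next one.
    return usefulness_gain > cost

cost = switching_cost(0.8, 0.3, 0.6, 0.4, 0.5)  # a fairly disruptive switch
print(should_switch(usefulness_gain=0.4, cost=cost))  # False: stay put
```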

Now, let’s look at how each of these potential habit breakers applies to Google. First of all, frequency probably won’t be a factor because we will search more, not less, in the future.

Stability may be a more likely cause. The fact is, the act of online searching hasn’t really changed that much in the last 20 years. We still type in a query and get a list of results. If you look at Google circa 1998, it looks a little clunky and amateurish next to today’s results page, but given that 16 years have come and gone, the biggest surprise is that the search interface hasn’t changed more than it has.

[Figure: Google’s results page, now and then]

A big reason for this is to maintain stability in the interface, so habits aren’t disrupted. The search page relies on ease of information foraging, so it’s probably the most tested piece of online real estate in history. Every pixel of what you see on Google and, to a lesser extent, its competitors, has been exhaustively tested.

That has been true in the past but because of the third factor, acceptability of outcomes, it’s not likely to remain true in the future. We are now in the age of the app. Searching used to be a discrete function that was just one step of many required to complete a task. We were content to go to a search engine, retrieve information and then use that information elsewhere with other tools or applications. In our minds, we had separate chunks of online functionality that we would assemble as required to meet our end goal.

Let me give you an example. Let’s imagine we’re going to London for a vacation. In order to complete the end goal – booking flights, hotels and whatever else is required – we know we will probably have to go to many different travel sites, look up different types of information and undertake a number of actions. We expect that this will be the best path to take to our end goal. Each chunk of this “master task” may in turn be broken down into separate sub-tasks. Along the way, we’ll be relying on those tools that we’re aware of and a number of stored procedures that have proven successful in the past. At the sub-task level, it’s entirely possible that some of those actions have been encoded as habits. For an example of how these tasks and stored procedures would play out in a typical search, see my previous post, A Cognitive Walkthrough of Searching.

But we have to remember that the only reason the brain is willing to go to all this work is that it believes it’s the most efficient route available. If there were a better alternative that would produce an acceptable outcome, the brain would take it. Our expectation of what counts as an acceptable outcome would be altered, and our Marginal Value algorithm would be reset.

Up to now, functionality and information haven’t intersected very often online. There were places we went to get information, and there were places we went to do things. But from this point forward, expect those two aspects of online life to overlap more and more often. Apps will retrieve information and integrate it with usefulness. Travel aggregator sites like Kayak and Expedia are an early example of this. They retrieve pricing information from vendors, user content from review sites and even some destination-related information from travel sites. This ups the game in terms of what we expect from online functionality when we book a trip. Our expectation has been reset because Kayak offers a more efficient way to book travel than using search engines and independent vendor sites. That’s why we don’t immediately go to Google when we’re planning a trip.

Let’s fast-forward a few years to see how our expectations could be reset in the future. I suspect we’re not too far away from having an app where our travel preferences have been preset. This proposed app would know how we like to travel and the things we like to do when we’re on vacation. It would know the types of restaurants we like, the attractions we visit, the activities we typically do, the types of accommodation we tend to book, etc. It would also know the sources we tend to use when qualifying our options (e.g., TripAdvisor). If we had such an app, we would simply put in the bare details of our proposed trip: departure and return dates, proposed destinations and an approximate itinerary. It would then go and assemble suggestions based on our preferences, all in one location. Booking would require a simple click, because our payment and personal information would be stored in the app. There would be no discrete steps, no hopping back and forth between sites, no cutting and pasting of information, no filling out forms with the same information multiple times. After confirmation, the entire trip and all required information would be made available on your mobile device. And even after the initial booking, the app would continue to comb the internet for new suggestions, reviews or events that you might be interested in attending.

This “mega-app” would take the best of Kayak, TripAdvisor, Yelp, TripIt and many other sites and combine it all in one place. If you love travel as much as I do, you couldn’t wait to get your hands on such an app. And the minute you did, your brain would have reset its idea of what an acceptable outcome would be. There would be a cascade of broken habits and discarded procedures.

This integration of functionality and information foraging is where the web will go next. Over the next 10 years, usefulness will become the new benchmark for online loyalty. As this happens, our expectation set points will be changed over and over again. And this, more than anything, will be what impacts user loyalty in the future. This changing of expectations is the single biggest threat that Google faces.

In the next post I’ll look at what happens when our expectations get reset and we have to look at adopting a new technology.

The Inevitable Wearable Technology Backlash

First published January 16, 2014 in Mediapost’s Search Insider

Okay, I’ve gone on record – I think wearable technology is a huge disruptive wave currently bearing down on us. Accept it.

And I’ve also said that stupid wearable technology is inevitable. Accept that as well.

It appears that this dam is beginning to burst.

Catharine Taylor had a humorous and totally on-point reaction to the “tech-togs” that were unveiled at CES. Her take: “Thanks but no thanks”

Maarten Albarda had a similar reaction to his first go-around with Google Glass – “Huh?”

Look – don’t get me wrong. Wearable technology, together with the “web of everything,” will eventually change our lives, but most of us won’t be going willingly. We’re going to have to get through the “bubble of silliness” first. Some of this stuff will make sense and elicit a well-earned “Cool” (or “Dope” or “Sick” or whatever generational thumbs-up is appropriate). Other things will garner an equally well-earned “WTF?” And some will be eminently sensible but will still end up being tossed out with the bathwater anyway.

Rob Garner always says “adoption follows function.” This is true, but each of us has a different threshold for what we deem functional. If technology starts moving that bar, we know, thanks to the work of Everett Rogers and others, that audience acceptance will follow the inevitable bell curve. Functionality is not equal in the eyes of all beholders.

The other problem with these new interfaces with technology is that function is currently scattered around like a handful of grass clippings in the wind. Sure, there are shards of usefulness, but unless you’re willing to wear more layers of wearable tech than your average early adopting Eskimo (or, as we say here in the politically correct north – Inuit), it’s difficult to see how this can significantly improve our day-to-day lives.

The other thing we have to grapple with is what I would call the WACF – the Weird and Creepy Factor. How exactly do we feel about having the frequency of our butt imprinting our sofa, our bank balance, our blood pressure and our body fat percentage beamed up to the data center of a start-up we’d never heard of before last Friday? I’m an admitted early adopter and I have to confess – I’m not ready to make that leap right now.

It’s not just the privacy of my personal data that’s holding me back, although that is certainly a concern. Part of this goes back to something I talked about a few columns back – the redefinition of what it means to “be” online rather than “go” online. With wearable technology, we’re always “on” – plugged into the network and sharing data whether we’re aware of it or not. This presents us with a philosophical loss of control. Chances are that we haven’t given this a lot of rational consideration, but it contributes to that niggling WACF that may be keeping us from donning the latest piece of wearable tech.

Eventually, the accumulated functionality of all this new technology will overcome all these barriers to adoption, but we will all have differing thresholds marking our surrender to the inevitable.  Garner’s assertion that adoption follows function is true, but it’s true of the functional wave as a whole and in that wave there will be winners and losers. Not all functional improvements get adopted. If all adoption followed all functional improvements, I’d be using a Dvorak keyboard right now. Betamax would have become the standard for videocassettes. And we’d be conversing in Esperanto. All functional improvements – all casualties to an audience not quite ready to embrace them.

Expect more to come.

The Psychology of Usefulness: How We Made Google a Habit

In the last two posts, I looked first at the difference between autotelic and exotelic activities, then how our brain judges the promise of usefulness. In today’s post, I want to return to the original question: How does this impact user loyalty? As we use more and more apps and destinations that rely on advertising for their revenues, this question becomes more critical for those apps and destinations.

The obvious example here is search engines, the original functional destination. Google is the king of search, but also the company most reliant on these ads. For Google, user loyalty is the difference between life and death. In 2012, Google made a shade over 50 billion dollars (give or take a few hundred million). Of this, over $43 billion came from advertising revenue (about 86%), and of that revenue, 62% came from Google’s own search destinations. That’s a big chunk of revenue to come from one place, so user loyalty is something Google pays pretty close attention to.

Now, let’s look at how durable Google’s hold on our brains actually is. Let’s revisit the evaluation cascade that happens in our brain each time we contemplate a task:

  • If very familiar and highly stable, we do it by habit
  • If fairly familiar but less stable, we do it by a memorized procedure with some conscious guidance
  • If new and unfamiliar, we forage for alternatives by balancing the effort required against the expected reward

Not surprisingly, the more our brain has to be involved in judging usefulness, the less loyal we are. If you can become a habit, you are rewarded with a fairly high degree of loyalty. Luckily for Google, they fall into this category – for now. Let’s look a little more at how Google became a habit and what might have to happen for us to break this habit.

Habits depend on three things: high repetition, a stable execution environment and consistently acceptable outcomes. Google was fortunate enough to have all three factors present.

First – repetition. How many times a day do you use a search engine? For me, it’s probably somewhere between 10 and 20 times per day. And usage of search is increasing. We search more now than we did 5 years ago. If you do something that often throughout the day, it wouldn’t make much sense to force your brain to actively think its way through that task each and every time – especially if the steps required to complete that task don’t really change that much. So, the brain, which is always looking for ways to save energy, records a “habit script” (or, to use the terminology of Ann Graybiel, “chunks”) that can play out without a lot of guidance. Searching definitely meets the requirements for the first step of forming a habit.

Second – stability. How many search engines do you use? If you’re like the majority of North Americans, you probably use Google for almost all your searches. This creates what we would call a stable environment. You know where to go, you know how to use it and you know how to use the output. There is a reason why Google is very cautious about changing its layout and only does so after a lot of testing. What you expect and what you get shouldn’t be too far apart. If they are, it’s called disruptive, and disruption breaks habits. This is the last thing that Google wants.

Third – acceptable outcomes. So, if stability preserves habits, why would Google change anything? Why doesn’t Google’s search experience look exactly like it did in 1998 (fun fact – if you search Google for “Google in 1998,” it will show you exactly what the results page looked like)? That would truly be stable, which should keep those all-important habits glued in place. Well, because expectations change. Here’s the thing about expected utility, which I talked about in the last post: it doesn’t go away when we form a habit; it just moves downstream in the process. When we do a task for the first time, or in an unstable environment, expected utility precedes our choice of alternatives. When a “habit script” or “chunk” plays out, we still need to do a quick assessment of whether we got what we expected. Habits only stay in place if the “habit script” passes this test. If we searched for “Las Vegas hotels” and Google returned results for Russian borscht, that habit wouldn’t last very long. So, Google constantly has to maintain this delicate balance – meeting expectations without disrupting the user’s experience too much. And expectations are constantly changing.
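
As a tiny sketch of that downstream check – my own illustration of the passage above, with invented names and thresholds:

```python
# The habit script runs with no deliberation up front, but the outcome
# is still audited against expectations afterward (invented example).

def run_habit(habit_script, expected_utility, tolerance=0.2):
    outcome = habit_script()  # plays out automatically, no conscious choice
    if outcome >= expected_utility - tolerance:
        return "habit reinforced"  # got roughly what we expected
    return "habit questioned"      # expectation violated: time to re-forage

# Relevant results keep the habit glued in place:
print(run_habit(lambda: 0.9, expected_utility=0.8))  # habit reinforced
# Borscht instead of Las Vegas hotels would not:
print(run_habit(lambda: 0.1, expected_utility=0.8))  # habit questioned
```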

[Figure: Internet adoption over time]

When Google was introduced in 1998, it created a perfect storm of habit-building potential. The introduction coincided with a dramatic uptick in adoption of the internet and usage of web search in particular. In 1998, 36% of American adults were using the Internet (according to PEW). In 2000, that had climbed to 46%, and by 2001 it was up to 59%. More of us were going online, and if we were going online we were also searching. The average searches per day on Google exploded from under 10,000 in 1998 to 60 million in 2000 and 1.2 billion in 2007. Obviously, we were searching – a lot – so the frequency-of-task prerequisite was well in hand.

Now – stability. In the early days of the Internet, there was little stability in our search patterns. We tended to bounce back and forth between a number of different search engines. In fact, the search engines themselves encouraged this by providing “Try your search on…” links for their competitors (an example from Google’s original page is shown below). Because our search tasks were spread across a number of different engines, there was no environmental stability, and so no chance for a true habit to form. The best our brains could do at this point was store a procedure that required a fair amount of conscious oversight (choosing engines and evaluating outcomes). Stability was further eroded by the fact that some engines were better at some types of searches than others. Some, like Infoseek, were better for timely searches due to their fast indexing cycles and large indexes. Some, like Yahoo, were better at canonical searches that benefited from a hierarchical directory approach. When searching in the pre-Google days, we tended to match our choice of engine to the search we were doing. This required a fairly significant degree of rational neural processing on our part, precluding the formation of a habit.

[Figure: The bottom of Google’s results page, 1998]

But Google’s use of PageRank changed the search ballgame dramatically. Their new way of determining relevancy rankings was consistently better for all types of searches than any of their competitors. As we started to use Google for more types of searches because of their superior results, we stopped using their competitors. This finally created the stability required for habit formation.
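
For the curious, the core idea can be sketched in a few lines. This is the textbook power-iteration form of PageRank on a toy graph, not Google’s production system:

```python
# Textbook power-iteration sketch of PageRank (illustrative only).
# graph maps each page to the list of pages it links to.

def pagerank(graph, damping=0.85, iterations=50):
    pages = list(graph)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, links in graph.items():
            targets = links if links else pages  # dangling pages spread evenly
            for target in targets:
                new_rank[target] += damping * rank[page] / len(targets)
        rank = new_rank
    return rank

web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
print(pagerank(web))  # "c", with the most inbound weight, ranks highest
```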

Finally, acceptable outcomes. As mentioned above, Google came out of the gate with outcomes that generally exceeded our expectations, which had been set by the spotty results of their competitors. Now, all Google had to do to keep the newly formed habit in place was to continue to meet the user’s expectations of relevancy. Thanks to the truly disruptive leap Google took with the introduction of PageRank, they had a huge advantage when it came to search results quality. Google has also done an admirable job of maintaining that quality over the past 15 years. While the gap has narrowed significantly (today, one could argue that Bing comes close on many searches and may even have a slight advantage on certain types), Google has never seriously undershot users’ expectations when it comes to providing relevant search results. Therefore, Google has never given us a reason to break our habits. This has resulted in a market share that has hovered over 60% for several years now.

When it comes to online loyalty, it’s hard to beat Google’s death grip on search traffic. But that grip may start to loosen in the near future. In my next post, I’ll look at the conditions that can break habitual loyalty, again using Google as an example. I’ll also look at how our brains decide to accept or reject new useful technologies.