The Psychology of Usefulness: The Acceptance of Technology – Part One

In the last post, I talked about what it takes to break a habit built around an online tool, website or application. In today’s post, I want to talk about what happens when we decide to replace that functional aid, whatever it might be.

So, as I said last time, the biggest factor contributing to the breakdown of habit is the resetting of our expectation of what is an acceptable outcome. If our current tools no longer meet this expectation, then we start shopping for a new alternative. In marketing terms, this would be the triggering of need.

Now, this breakdown of expectation can play out in one of two ways. First, if we’re not aware of an alternative solution, we may just feel an accumulation of frustration and dissatisfaction with our current tools. This build-up of frustration can create a foundation for further “usefulness foraging” but generally isn’t enough by itself to trigger action. (The second way, when we do know of an alternative, is where the rest of this post picks up.) This lends support to my hypothesis that we’re borrowing the evolved Marginal Value algorithm to help us judge the usefulness of our current tools. Or, to put it in terms we’re more familiar with: “A bird in the hand is worth two in the bush.” You don’t leave a food patch unless: A) you are reasonably sure there’s another, more promising patch that can be reached with acceptable effort, or B) you have completely exhausted the food available in the patch you’re in. I believe the same is true for usefulness. We don’t throw out what we have until we either know there’s an acceptable alternative that promises a worthwhile increase in usefulness or our current tool is completely useless. Until then, we put up with the frustration.
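To make the foraging analogy concrete, here’s a minimal sketch of that stay-or-switch rule. It’s purely my own illustration: the function, the scores and the threshold are all invented, not anything from Charnov’s work.

```python
# A toy "usefulness foraging" rule, loosely inspired by Charnov's
# Marginal Value Theorem. All names, scores and thresholds are
# invented for illustration.
from typing import Optional

def should_switch_tools(current_usefulness: float,
                        known_alternative: Optional[float],
                        switching_effort: float) -> bool:
    """Decide whether to abandon the current tool (i.e. leave the patch)."""
    if current_usefulness <= 0.0:
        return True               # B: the current tool is completely useless
    if known_alternative is None:
        return False              # no known alternative: frustration builds, we stay
    # A: the promised gain must outweigh the effort of switching
    return (known_alternative - current_usefulness) > switching_effort

print(should_switch_tools(0.4, None, 0.2))  # False: annoying, but nowhere to go
print(should_switch_tools(0.4, 0.9, 0.2))   # True: the gain justifies the effort
```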

The Technology Acceptance Model

Let’s say that we have decided it’s worth the effort to find an alternative. What are the mechanisms we use to find the best one? Fred Davis and Richard Bagozzi tackled that question in 1989 and came up with the first version of their Technology Acceptance Model. They took the Theory of Reasoned Action, developed by Martin Fishbein and Icek Ajzen a decade earlier (1975, 1980), and applied it to the adoption of new technology. They also drew on Everett Rogers’ work on the diffusion of innovations.

Like all models, the TAM had to make some simplifying assumptions to reduce real-world decisions to a theoretical framework. Those simplifications are why it has required a number of revisions to bring it closer to what technology adoption decisions actually look like.

Let’s start with the foundation of the Theory of Reasoned Action. In its simplest form, the TRA says that voluntary behavior is predicted by an individual’s attitude towards that behavior and by how they believe others will judge them if they perform it.

[Figure: the Theory of Reasoned Action model]
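In its textbook form, the diagram reduces to a weighted sum: behavioral intention is attitude plus subjective norm, each multiplied by an empirically fitted weight. Here’s a minimal sketch; the default weights below are placeholders, not values from Fishbein and Ajzen.

```python
# Textbook TRA: behavioral intention (BI) as a weighted sum of
# attitude (A) and subjective norm (SN). Studies fit the weights by
# regression; the defaults below are placeholders.

def behavioral_intention(attitude: float, subjective_norm: float,
                         w_attitude: float = 0.6, w_norm: float = 0.4) -> float:
    """BI = w1 * A + w2 * SN"""
    return w_attitude * attitude + w_norm * subjective_norm

# A positive attitude toward exercising plus a supportive social circle
print(round(behavioral_intention(attitude=0.8, subjective_norm=0.9), 2))  # 0.84
```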

So, let’s take the theory for a test drive – if you believe that exercising will increase your health and you also believe that others in your social circle will applaud you for exercising, you’ll exercise. With this example, I think you begin to see where the original TRA may run into problems. Even with the best of intentions, we may not actually make it to the gym. Fishbein and Ajzen’s goal was to create an elegant, parsimonious model that would reliably predict both intentions and behaviors, while drawing a distinction between the two. Were they successful?

In a meta-analysis of TRA, Sheppard et al. (1988) found that attitude was a fairly accurate predictor of intention. If you believe going to the gym is a good thing, you will probably intend to go to the gym. The model didn’t do as good a job of predicting behavior. Even if you did intend to go to the gym, would you actually go?

The successful progression from intention to behavior seemed to depend on several real-world factors, including the time between intention and action (the longer the interval, the more the intention erodes) and a lack of control over circumstances. Back to the gym: what if your gym suddenly increased its membership fees, or a sudden snowstorm made it difficult to drive there?

Also, if you were choosing from a set of clear alternatives and had to choose one, TRA did a pretty good job of predicting behaviors. But if alternatives were undetermined, or there were other variables to consider, then the predictive accuracy of TRA dropped significantly.

Let me offer an example of how TRA might not work very well in a real-world setting. In my book, The BuyerSphere Project, I spent a lot of time looking at the decision process in B2B buying scenarios. Under the TRA model, if a buyer had to choose between four different software programs for their company, we could take the buyer’s attitudes towards each program, together with the aggregated and weighted attitudes of their co-workers, peers and bosses (weighted, because not every opinion should carry the same weight), to determine their intention. And once we have their intention, that should lead to behavior.
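Here’s what that naive aggregation looks like as a sketch. Every stakeholder, weight and score below is invented for the example.

```python
# Illustrative only: a TRA-style weighted aggregation of stakeholder
# attitudes toward four candidate programs. All data is made up.

stakeholder_weights = {"buyer": 0.4, "boss": 0.3, "peer_1": 0.2, "peer_2": 0.1}

attitudes = {  # each stakeholder's attitude (0 to 1) toward each program
    "buyer":  {"A": 0.7, "B": 0.5, "C": 0.9, "D": 0.3},
    "boss":   {"A": 0.6, "B": 0.9, "C": 0.4, "D": 0.5},
    "peer_1": {"A": 0.5, "B": 0.6, "C": 0.7, "D": 0.6},
    "peer_2": {"A": 0.8, "B": 0.4, "C": 0.6, "D": 0.7},
}

def weighted_attitude(program: str) -> float:
    """Weight each opinion by that stakeholder's influence, then sum."""
    return sum(weight * attitudes[person][program]
               for person, weight in stakeholder_weights.items())

scores = {program: round(weighted_attitude(program), 2) for program in "ABCD"}
print(scores)                       # {'A': 0.64, 'B': 0.63, 'C': 0.68, 'D': 0.46}
print(max(scores, key=scores.get))  # 'C': the intention the model would predict
```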

But in this scenario, let’s look at some of the simplifying assumptions we’ve had to make to cram a real-world scenario into the Fishbein and Ajzen model:

  • We assume a purchase will have to be made from one of the four alternatives. In a real-world situation, the company may well decide to stick with what it has.
  • We assume the four choices will remain static and that a new candidate won’t appear out of left field.
  • We assume that attitudes towards each of the alternatives will remain static through the behavioral interval. This almost never happens in B2B buying scenarios.
  • We assume the buyer – a rational agent – will be in full control of their behaviors and the ultimate decision. Again, this is rarely the case in B2B buying decisions.
  • We assume that no mitigating factor will arise between intention and behavior – for example, a spending freeze or a change in requirements.

As you can see, in trying to create a parsimonious model, Fishbein and Ajzen ran into a common trap: they simplified to the point where the model failed to work consistently in the real world.

But in this review, Alice Darnell pointed out Sheppard’s main criticism of the TRA model:

Sheppard et al. (1988) also addressed the model’s main limitation, which is that it fails to account for behavioural outcomes which are only partly under the individual’s volitional control.

Note that word: volitional. I’ve highlighted many external factors that may lie beyond the volitional control of the individual, but I think the biggest limitation of the TRA lies in its name: Theory of Reasoned Action. It assumes that reason drives our intentions and behaviors. It doesn’t account for emotion.

Applying Reasoned Action to Technology Acceptance

Now, let’s see how Davis and Bagozzi took Fishbein and Ajzen’s foundational work and applied it to the acceptance of new technologies.

In their first model (1989), they took attitudes and subjective norms (the attitudes of others) and adapted them to a more applied activity: the use of a new technological tool. They came up with two attitude drivers: Perceived Usefulness and Perceived Ease of Use. If you think back to Charnov’s Marginal Value Theorem, this is the same risk/reward mechanism at work. In foraging, it’s the yield of food weighed against the perceived required effort. In Technology Acceptance, Perceived Usefulness is the reward and Perceived Ease of Use is the risk to be calculated. Davis and Bagozzi assume the user does a quick mental calculation, drawing on their own knowledge and the knowledge of others, to arrive at a Usefulness/Ease value that forms their Attitude Toward Using. This then becomes their Behavioral Intention to Use, which should lead to Actual System Use.

[Figure: the Technology Acceptance Model]
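Sketched as a pipeline, the original model might look something like this. The weights are invented placeholders; in actual TAM studies they’re regression coefficients estimated from survey data.

```python
# A stylized sketch of the original TAM pipeline. The weights are
# placeholders, not fitted values from any study.

def attitude_toward_using(perceived_usefulness: float,
                          perceived_ease_of_use: float,
                          w_pu: float = 0.7, w_peou: float = 0.3) -> float:
    """Usefulness is the reward; ease of use offsets the perceived effort."""
    return w_pu * perceived_usefulness + w_peou * perceived_ease_of_use

def behavioral_intention_to_use(attitude: float) -> float:
    """In the original model, intention follows directly from attitude."""
    return attitude

# Very useful, but only moderately easy to use
attitude = attitude_toward_using(perceived_usefulness=0.9, perceived_ease_of_use=0.5)
print(round(behavioral_intention_to_use(attitude), 2))  # 0.78
```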

The TAM model was clean and parsimonious. There was just one problem. It didn’t do a very good job of predicting usage in real world situations. There seemed to be much more at work here in actual decisions to accept technologies. In the next post, we’ll look at how the TAM model was modified to bring it closer to real behaviors.
