In the past five posts, I’ve been looking at how we choose to accept new technologies. As part of that, we’ve had a fairly exhaustive review of the various versions of the Technology Acceptance Models proposed by Fred Davis, Richard Bagozzi and, most prolifically, Viswanath Venkatesh.
Before forging ahead, I’d like to provide a brief recap of primary thoughts behind the models.
In the first post, I explored the difference between autotelic and exotelic activities. The first we do for the sheer enjoyment of the activity itself – our reward is inherent in the doing. Exotelic activities are the things we do because we have to; there is little to no reward in them. Generally, when we’re judging usefulness, it’s to complete an exotelic activity. The emotion most commonly invoked in that judgement is an aversion to risk – a mildly negative emotional valence, typically showing up as anxiety or concern rather than outright fear or dread. Because the emotional stakes are low, the judgement is mostly a calculation of the resources required vs the usefulness expected.
Next, in the second post, I explained why I believe our judgement of usefulness is based on a fairly heuristic calculation by the brain, relying on the same mechanisms we use when foraging for food. Because of that belief, I’ve borrowed heavily from the work of Pirolli and Card on Information Foraging, and from Eric Charnov’s work on Optimal Foraging and his Marginal Value Theorem.
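Charnov’s Marginal Value Theorem says a forager should leave a patch when its marginal rate of return falls to the average rate achievable across the whole environment. As a rough illustration only – the gain curve and all the numbers below are hypothetical, not drawn from Charnov’s work – a few lines of Python can show how a longer “travel” (search) time makes it optimal to stay longer with the current patch, or, by analogy, the current technology:

```python
import math

def optimal_patch_time(gain, travel_time, dt=0.001, t_max=60.0):
    """Scan residence times and return the one that maximizes the
    average rate of gain, gain(t) / (travel_time + t)."""
    best_t, best_rate = dt, 0.0
    t = dt
    while t <= t_max:
        rate = gain(t) / (travel_time + t)
        if rate > best_rate:
            best_rate, best_t = rate, t
        t += dt
    return best_t, best_rate

# Hypothetical diminishing-returns gain curve: a patch yields up to
# 10 units of value, depleting exponentially as we exploit it.
gain = lambda t: 10.0 * (1.0 - math.exp(-0.5 * t))

# When alternatives are harder to reach (longer travel/search time),
# the theorem predicts we should stay longer in the current patch.
t_short, _ = optimal_patch_time(gain, travel_time=1.0)
t_long, _ = optimal_patch_time(gain, travel_time=5.0)
```

Running this, `t_long` comes out larger than `t_short` – the same intuition that makes us stick with an adequate tool when searching for a better one is costly.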
Because there’s little emotional engagement, we also tend to turn useful resources into habits if the frequency of use is high enough. This is the ground I covered in posts three and four. First, using Google as an example, I looked at how habits are created and maintained. Then, in the next post, I looked at the factors that might disrupt a habit, forcing us to look for a viable alternative. The more the brain has to be involved in judging usefulness, the less loyal we tend to be.
Also, we will only seek new technologies if: a) our current technology no longer meets our expectations, which are often reset because b) we’ve become aware of a new, superior technology.
Then, we have to decide whether or not to accept a new technology. There have been several attempts to create a model that can predict the acceptance of a new technology. Most relied on the same foundational assumptions:
- Some mix of internal and external motivators will result in the creation of an attitude.
- Depending on the valence of the attitude (either negative or positive) we may form an intention to use the technology.
- Once this intention is formed, it leads to usage.
All the modifications to the model (five revisions at last count) focused on the first two of these three assumptions, offering alternatives for the motivators that create the attitude. Some versions removed the attitude step completely and moved directly to intention. But none of them changed the assumed progression from intention to usage.
The useful parts of these models that I wanted to carry forward are:
- Intentions are formed by a heuristic balancing of negative and positive factors in the adopter’s mind, often labeled Perceived Usefulness and Perceived Ease of Use.
- External factors, such as the opinions of others, impact our decisions to adopt a technology.
- The cognitive process involved roughly corresponds to a Bayesian analysis, where we set a “prior” – our original attitude, and update it based on new information gathered through the decision process.
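That Bayesian framing can be made concrete with a toy sketch. All the probabilities below are invented for illustration: we start with a neutral prior attitude toward a tool, then update it as positive signals – a recommendation, a successful trial – arrive.

```python
def bayes_update(prior, p_evidence_if_useful, p_evidence_if_not):
    """Posterior P(useful | evidence) via Bayes' rule, given a prior
    and the likelihood of the evidence under each hypothesis."""
    numerator = p_evidence_if_useful * prior
    denominator = numerator + p_evidence_if_not * (1.0 - prior)
    return numerator / denominator

belief = 0.5  # neutral prior attitude toward the tool

# Three positive signals, each (hypothetically) three times more
# likely to occur if the tool really is useful (0.6 vs 0.2).
for _ in range(3):
    belief = bayes_update(belief, 0.6, 0.2)

# belief is now about 0.96 - the prior has shifted to a strongly
# positive attitude, without any single signal being decisive.
```

Each signal multiplies the odds in favour of “useful” by three, which is why a handful of modest cues can move an initially neutral attitude a long way.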
The potentially flawed assumptions I would like to leave behind are:
- The process is typically a linear one, moving from the left of the model (attitude) to the right of the model (usage).
- There are no mediating factors between the intention and usage boxes in any of the models.
In 2007, one of the original authors of the first TAM, Bagozzi, said it was time for a paradigm shift in the thinking about technology acceptance. He brought in an entirely new context in which to think about the acceptance of technology – the striving for and achievement of goals. This created a more holistic view of the decision process, where the acceptance of technology wasn’t artificially isolated, but was part of a much broader frame where that acceptance was contingent on a hierarchy of goals and sub-goals. What I particularly liked was the addition of “desire” as a step, and also the introduction of self-regulation as a mediating factor. Bagozzi was the first to indicate that the process was possibly more recursive, an iterative cycle rather than a linear path.
Bagozzi’s inclusion of goal setting and achievement builds a context for adoption. This aligns with Everett Rogers’s extensive work in innovation adoption, in which he said,
> An important factor regarding the adoption rate of an innovation is its compatibility with the values, beliefs, and past experiences of individuals in the social system.
While the acceptance of a technology may be a personal decision, it is almost always set within a broader social context. All the versions of Technology Acceptance Models I looked at included some type of social mediation in the acceptance process. But social influence was treated mainly as a factor in the creation of an initial attitude, and as a mediating factor in the progression from attitude to intention to behavior. In other words, if the acceptance of a technology made you socially unpopular, you would probably change your mind and reject it.
But when we choose to achieve a goal, there is a cognitive process that happens which creates a framework for acceptance. The goal becomes the primary evaluative topic and the technology generally becomes secondary to it. Bagozzi recognized that the two are interlinked and have to be evaluated together. We choose a goal, divide this into sub-goals and then seek how to execute against these goals.
Let’s use a personal example to see how goals and technology acceptance are intrinsically linked. Let’s say our goal is to get healthier. This breaks down into several sub-goals: beginning a regular exercise routine, eating better, losing weight, drinking less, etc. Each of these can then be further divided into more specific goals. Let’s take eating better. It could involve tracking our calories, paying more attention to nutrition labels, including more fresh fruits and vegetables in our diet, cutting out sugar and avoiding processed foods. At this point, we may decide to use a tool like Livestrong‘s MyPlate Calorie tracker.

If you were to use one of the various versions of TAM to predict the acceptance of this new technology, you would artificially divorce the act of acceptance from the broader goal hierarchy that precedes it. According to TAM, your acceptance of MyPlate would be determined by your evaluation of its ease of use and expected usefulness. While undoubtedly important, these two factors are completely dependent on the mental scaffolding you’ve built around the idea of getting healthier. There are myriad factors beyond these that would have some impact on your eventual acceptance or rejection of the technology in question. For example, perhaps you decide that calorie counting is not the best path to eating better, and so any tool that counts calories gets rejected out of hand. Or perhaps you fall off the wagon with your eating plan and reject the tool not because it isn’t useful or easy to use, but simply because counting calories constantly reminds you how weak your willpower is.
So, with the past posts recapped, in the next post we’ll forge ahead with a proposed new Technology Acceptance Model.