Finding Our Way in the Social Landscape

First published March 6, 2014 in Mediapost’s Search Insider

Last month, Om Malik (of GigaOM fame) wrote an article in Fast Company about the user backlash against Facebook. To be fair, it seems that what's happening to Facebook is not so much a backlash as apathy. You have to care to lash back. This is more of a wholesale abandonment, as millions of users are going elsewhere – using single-purpose apps to get their social media fix. According to the article,

“we cycle between periods in which we want all of our Internet activity consolidated and other times in which we want a bunch of elegant monotaskers. Clearly we have reentered a simplification phase.”

There's a reason why Facebook has been desperately trying to acquire Snapchat for a reported $3 billion. There's also a reason why they picked up Instagram for a billion last year. It's because these simple little apps are leaving the homegrown Facebook alternatives in the dust. Snapchat is killing Facebook's Poke – as Mashable pointed out in this comparison. Snapchat has consistently stayed near the top of App Annie's most-popular-downloads chart for the past 18 months – a period that coincides exactly with Facebook's release of Poke.

[Figure: Download rates of Facebook Poke]

[Figure: Download rates of Snapchat]

Malik indicates it’s because we want a simpler, streamlined experience. A recent article in Business Insider goes one step further – Facebook is just not cool anymore. The mere name induces extended eye rolling in teenagers. It’s like parking the family mini-van in the high school parking lot.  “I hate Facebook. It’s just so boring,” said one of the teens interviewed. Hate! That’s a pretty strong word. What did the Zuck ever do to garner such contempt? Maybe it’s because he’s turning 30 in a few months. Maybe it’s because he’s an old married man.

Or maybe it's just that we have a better alternative. Malik has a good point. He indicates that we tend to oscillate between consolidation and specialization. I take a somewhat different view. What's happening in social media is that we're getting to know the landscape better. We're finding our way. This isn't so much about changing tastes as it is about increased familiarity and a resetting of expectations.

If you look at how humans navigate new environments, you'll notice some striking similarities. When we encounter a new landscape, we go through three phases of wayfinding. We begin by relying on landmarks. These are the "highest ground" in a new, unfamiliar landscape and we navigate relative to them. They become our reference points and we don't stray far from them. Facebook is, you guessed it, a landmark.

The next phase is called "Route Knowledge." Here, we memorize the routes we use to get from landmark to landmark. We come to recognize the paths we take all the time. In the world of online landscapes, you could substitute the word "app" for "route." Instagram, Snapchat, Vine and the rest are routes we use to get where we need to go quickly and easily. They're our virtual "shortcuts."

The last stage of wayfinding is "Survey Knowledge." Here, we are familiar enough with a landscape that we've acquired a mental "map" of it and can mentally calculate alternative routes to get to our destination. This is how you navigate in your hometown.

What's happening to Facebook is not so much that our tastes are swinging. It's just that we're confident enough in our routes/apps that we're no longer solely reliant on landmarks. We know what we want to do and we know the right tool to use. The next stage of wayfinding, Survey Knowledge, will require some help, however. I've talked in the past about the eventual emergence of meta-apps. These will sit between us and the dynamic universe of tools available. They may be largely or even completely transparent to us. What they will do is learn about us and our requirements while maintaining an inventory of all the apps at our disposal. Then, as our needs arise, they will serve up the right app for the job. These meta-apps will maintain our survey knowledge for us, keeping a virtual map of the online landscape to allow us to navigate at will.

As Facebook tries to gobble up the Instagrams and Snapchats of the world, they’re trying to become both a landmark and a meta-app. Will they succeed? I have my thoughts, but those will have to wait until a future column.

The Psychology of Usefulness: The Acceptance of Technology – Part Four

After Venkatesh and Davis released the TAM 2 model, Venkatesh (working with Bala) further expanded the variables that go into our calculation of Perceived Ease of Use in TAM 3:

[Figure: the TAM 3 model]

Venkatesh, V. and Bala, H., "TAM 3: Advancing the Technology Acceptance Model with a Focus on Interventions"

Venkatesh divided the determinants of Perceived Ease of Use into two categories: Anchor determinants and Adjustment determinants. Anchor determinants were the user's baseline and came from their general beliefs about computers and computer usage. In Bayesian terms, this would be the user's "prior." It creates the foundational attitude towards the technology in question.

Anchor determinants included:

  • Computer self-efficacy – How proficient the user believes they are with the current technology paradigm (i.e. how comfortable they are with computers)
  • Perception of External Control – How much organizational support is there for the system to be accepted?
  • Computer Anxiety – Is there apprehension or fear involved with using a computer?
  • Computer Playfulness – How spontaneous is the user in their computer interactions?

Then, Venkatesh added the Adjustment determinants. These factors come from direct experience with the technology in question and are used to “adjust” the user’s attitude towards the technology. Again, this is a very Bayesian cognitive process.

Adjustment determinants included:

  • Perceived Enjoyment – Is using the system enjoyable?
  • Objective Usability – Is the effort actually required what it was perceived to be (resulting in either positive or negative reinforcement)?

Over time, some of the Anchor factors (Playfulness, Anxiety) will diminish in importance and the Adjustment factors will grow stronger.

So now, in TAM 3, we have essentially the same process of acceptance, but with much more granularity in the definition of the determinants that go into the Perception of Ease of Use. However, with the division of determinants into the categories of "Anchor" and "Adjustment," Venkatesh starts to hint at the iterative nature of this process of acceptance. We create a baseline belief or attitude, and then this gets updated, either through external forces (the Subjective Norm) or our own internal experiences (the Adjustment determinants). While the model depicts a linear decision path, it now appears likely that the path is a recursive one.
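
To make that Bayesian flavor concrete, here's a minimal sketch in Python. The weights, scales and blending rule are my own invented assumptions for illustration – TAM 3 is a statistical model estimated from survey data, and it doesn't prescribe numbers like these:

```python
# Illustrative only: weights and the update rule are invented assumptions,
# not values from Venkatesh and Bala's paper. All inputs are on a 0-1 scale.

def anchor_prior(self_efficacy, external_control, anxiety, playfulness):
    """Form the baseline belief (the Bayesian 'prior') from Anchor determinants."""
    # Anxiety works against acceptance, so it enters with a negative weight.
    return 0.4 * self_efficacy + 0.3 * external_control \
           - 0.2 * anxiety + 0.1 * playfulness

def adjust(belief, enjoyment, objective_usability, learning_rate=0.5):
    """Blend the current belief with Adjustment determinants from hands-on use."""
    evidence = 0.5 * enjoyment + 0.5 * objective_usability
    return (1 - learning_rate) * belief + learning_rate * evidence

belief = anchor_prior(self_efficacy=0.8, external_control=0.6,
                      anxiety=0.3, playfulness=0.5)
print(f"prior: {belief:.2f}")

# Each session of direct experience nudges the belief toward observed reality,
# mimicking the iterative, recursive updating described above.
for session in range(3):
    belief = adjust(belief, enjoyment=0.7, objective_usability=0.9)
    print(f"after session {session + 1}: perceived ease of use = {belief:.2f}")
```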

As an interesting aside, Brown, Massey, Montoya-Weiss and Burkman (2002) found variance in the importance of Perceived Ease of Use. Remember, in the original TAM model, Davis indicated that while Perceived Ease of Use does have an impact on the original attitude towards a technology, he found that Perceived Usefulness is a more powerful indicator of usage intention. Brown et al. found that this varies depending on whether the acceptance of a technology is mandatory or voluntary. If acceptance is mandatory, Perceived Ease of Use may actually have a more important impact on system acceptance.

After TAM 3, there was a further attempt to round up the competing theories that all contributed to the evolution of TAM. This was the hopefully named Unified Theory of Acceptance and Use of Technology (or UTAUT) – proposed in 2003 by Venkatesh, Morris, Davis and Davis:

[Figure: the UTAUT model]

Venkatesh, V., Morris, M.G., Davis, F.D., and Davis, G.B. “User Acceptance of Information Technology: Toward a Unified View,” MIS Quarterly, 27, 2003, 425-478

So, what began as an attempt to simplify our understanding of the acceptance of technology became a rather unwieldy beast. Bagozzi, who worked with Davis on the original TAM model, finally had to step back in and comment:

The exposition of UTAUT is a well-meaning and thoughtful presentation. But in the end we are left with a model with 41 independent variables for predicting intentions and at least eight independent variables for predicting behavior. Even here, arguments can be made that important independent variables have been left out, because few of the included predictors are fundamental, generic or universal and future research is likely to uncover new predictors not subsumable under the existing predictors. The IS field risks being overwhelmed, confused and misled by the growing piecemeal evidence behind decision making and action in regard to technology adoption/acceptance/rejection.

But the biggest criticism Bagozzi (2007) leveled at Venkatesh's models pointed out the same fundamental flaw that I mentioned – the assumption that intention leads to usage. Here are Bagozzi's main concerns with the evolution of TAM:

Is Behavior the End Goal? – Bagozzi says:

The models…fail to consider that many actions are taken not so much as ends in and of themselves but rather as means to more fundamental ends or goals.

TAM ends at behavior. Actually, it ends at intent, as it assumes that intent always leads to behavior, but we'll come back to that in a moment. Bagozzi's point is that behavior is dependent on a broader context, and there could be an end goal impacting acceptance that is completely ignored in the model. For example, let's say that the specific technology to be accepted is a tool to analyze conversion data in online campaigns. The end goal is to improve ROI from all online campaigns. But, to reach this end goal, there are a number of contributory goals, including, but not limited to:

  • More efficient budget allocation
  • Improved landing page performance
  • Better tracking of performance data
  • Improved click-through rates on online ads
  • An improved online conversion path

The tool in question is a subset of the third item in the list. Because the decision to accept this tool is contingent on meeting a number of broader goals, that goal hierarchy becomes a critical factor in its acceptance. But, as Bagozzi points out, the only place this is presumably accounted for is in the anticipated beliefs upstream in the model. Once again, TAM falls victim to its own quest for parsimony. Bagozzi argues that a better approach is to understand that this is a process, and as such will include goal striving. And that is a recursive process:

In goal striving, intention formation is succeeded by planning (e.g., when, where and how to act instrumentally), overcoming obstacles, resisting temptations, monitoring progress to goal achievement, readjusting actions, maintaining effort and willpower, and reassessing and even changing goals and means. These processes fill the gaps between intention and behavior and between behavior and goal attainment and are crucial for the successful adoption and use of technology.

Finally! An acknowledgement that it’s not a straight line from intention to behavior!

The Gap between Attitude and Intention – First of all, Bagozzi disagreed with the elimination of Attitude as a preliminary step to Intention. Further, he says that even with Attitude back in place, there may be very compelling reasons why a person may agree that the technology is acceptable, but still choose not to accept it. To fill in the gap, he proposes borrowing from the Belief-Desire model. Here, even if the Belief is in place, there also needs to be Desire before the intention to act is formed. So, to recap, a user may have all the right determinants in place to decide that the technology in question is perceived to be useful (PU) and that it will be sufficiently easy to learn to use (PEU) but still not have any desire to accept the technology. Perhaps the decision to accept was made by her boss, who is an asshole, and she's resisting on principle.

In his paper, Bagozzi goes on at some length to outline the limitations of the proposed TAM models. In addition to the above points, he suggests that the group, cultural and social aspects of technology acceptance are not adequately dealt with in the models, nor are emotions, self-regulation and the other mediators that are common in our pursuit of goals. In short, he says that technology acceptance is too reliant on context to lend itself to general models, and the decision path itself is much more complex than the models would indicate. He advocates a new foundation, based on his work on goal setting and striving:

[Figure: Bagozzi's decision making core (Bagozzi 2007)]

This lays out the core decision-making process – not specific to technology acceptance, but applicable to striving towards any goal. But, in this model, there is the opportunity to "plug in" factors specific to the acceptance of a technology within a specific context. For example, inputs into Goal Desire could include any number of things: first of all, the goal itself (taking into account the entire goal hierarchy that leads to the focal goal – the acceptance of a specific technology), anticipated and anticipatory emotions, relative advantage, job fit, attitudes toward success and failure, outcome expectancies, and, of course, Perceived Usefulness and Perceived Ease of Use. The exact balance will be contingent on circumstance.

The arrow leading up to Action Desire represents mediating factors such as group norms, subjective norms, social identity, effort expectancy and attitudes towards an act.

A key addition to Bagozzi's proposed model is self-regulation. Somewhere between desire and intention, humans have the ability to reflect on their desires and decide if they are comfortable intending to act on them. Specifically, in the case of technology adoption, we have to decide whether the behaviors we would undertake sit well with our moral and belief framework. Let's say, for instance, that the technology adoption being considered would increase efficiency dramatically, allowing the company to decrease head count. You may be aware of the human cost of the decision to adopt, and this may cause you to weigh your desire (increased efficiency) against it.

Below is an expanded version of Bagozzi’s model with all the inputs shown.

[Figure: Bagozzi's proposed model with inputs shown (Bagozzi 2007)]

What is interesting in Bagozzi's model is the chain of decision making that separates Goal Desire from Behavioral Desire. This is a further exploration of the transition from Attitude to Action, which was so deterministic in Venkatesh's models. Bagozzi sets it in what is, to me, a more palatable framework. We have goals, which likely include broad goals and sub-goals in some type of hierarchy. Our desire to reach these goals then has to be translated into intentions, where the actual execution required begins to be planned out. This helps us understand the required behaviors, which then leads to behavioral desires. These desires then get translated into intentions. But throughout this chain, there are a number of mediating factors, both internal and external, that can cause reflection, resetting, outright abandonment or modification of both desires and intentions. There is no single arrow pointing to the right. There is, instead, an iterative process that allows for looping back to any one of the previous stages.
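
One way to picture the difference from a single rightward arrow is as a loop with checkpoints. The sketch below is purely illustrative – the stage names follow Bagozzi's chain, but the pass/fail checks and the loop-back rule are assumptions of mine, not part of his formal model:

```python
import random

# Stage names follow Bagozzi's chain; everything else here is an
# illustrative assumption, not his formal model.
STAGES = ["goal desire", "goal intention", "behavioral desire",
          "behavioral intention", "behavior"]

def mediators_pass():
    """Stand-in for mediators: group norms, emotions, self-regulation, etc."""
    return random.random() > 0.3   # arbitrary pass rate for the demo

def strive(max_steps=50):
    stage, steps = 0, 0
    while stage < len(STAGES) and steps < max_steps:
        steps += 1
        if mediators_pass():
            print(f"advance: {STAGES[stage]}")
            stage += 1
        else:
            # A blocked stage triggers reflection: we loop back and readjust
            # rather than marching straight from intention to behavior.
            print(f"blocked at {STAGES[stage]}; looping back to reassess")
            stage = max(0, stage - 1)
    return stage == len(STAGES)   # True only if behavior was actually reached

strive()
```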

This post has become much longer and much more technical than I originally intended, so I think we’ll wrap this up and start fresh next time with a recap of the various models of Technology Acceptance, and my attempt to build a model that allows for iterative reflection and adjustment.

The Psychology of Usefulness: The Acceptance of Technology – Part Three

In Part Two of this series, I looked at Davis and Bagozzi’s Technology Acceptance Model, first proposed in 1989.

[Figure: the Technology Acceptance Model]

As I said, while the model was elegant and parsimonious, it seems to simplify the realities of technology acceptance decisions too much. In 2000, Venkatesh and Davis tried to deal with this in TAM 2 – the second version of the Technology Acceptance Model.

[Figure: the TAM 2 model]

In this version, they added several determinants of Perceived Usefulness and demoted Perceived Ease of Use to being just one of the factors that impacted Perceived Usefulness.  Impacting this mental calculation were two mediating factors: Experience and Voluntariness. This rebalancing of factors provides some interesting insights into the mental process we go through when making a decision whether we’ll accept a new technology or not.

Let’s begin with the determinants of Perceived Usefulness in the order they appear in Venkatesh and Davis’s model:

Subjective Norm: TAM 2 resurrects one of the key components of the original Theory of Reasoned Action model – the opinions of others in your social environment.

Image: Venkatesh and Davis also included another social factor in their list of determinants – how would the acceptance of this technology impact your status in your social network? Notice that our calculation of the image enhancement potential has the Subjective Norm as an input. It’s a Bayesian prediction – we start with our perceived social image status (the prior) and adjust it based on new information, in this case the acceptance of a new technology.

Job Relevance: How applicable is the technology to the job you have to do?

Output Quality: How will this technology impact your ability to perform your job well?

Result Demonstrability: How easy is it to show the benefits of accepting the technology?

It’s interesting to note how these factors split: the first two (subjective norm and image) being related to social networks, the next two (Job Relevance and Output Quality) being part of a mental calculation of benefit and the last one, Demonstrability, bridging the two categories: How easy will it be to show others that I made the right decision?

According to the TAM 2 model, we combine these factors, which mix practical task-performance considerations and social status aspirations, into a rough calculation of the perceived usefulness of a technology. After this is done, we start balancing that against how easy we perceive the new technology will be to use. Venkatesh and Davis felt that Perceived Ease of Use has a variable influence in two areas: the forming of an attitude towards the technology and a behavioral intention to use the technology. The first is pretty straightforward. Our attitude is our mental frame regarding the technology. Again, to use a Bayesian term, it's our prior. If the attitude is positive, it's very probable that we'll form a behavioral intention to use the technology. But there are a few mediating factors at this point, so let's take a closer look at the creation of Behavioral Intention.

In forming our intention, Perceived Ease of Use is just one of the determinants in our "usefulness" calculation, according to the model. And it depends on a few things. It depends on efficacy – how comfortable we judge ourselves to be with the technology in question. It also depends on what resources we feel we will have access to that can help us up the learning curve. But, in the forming of our attitude (and thereby our intention), Venkatesh and Davis felt that Perceived Usefulness will typically be more important than Perceived Ease of Use. If we feel a technology will bring a big enough reward, we will be willing to put up with a significant degree of pain. At least, we will in what we intend to do. It's like making a New Year's resolution to lose weight. At the time we form the intention, the pain involved is somewhere in the future, so we go forward with the best of intentions.

As we move forward from Attitude to Intention, this transition is further mediated in the model by our subjective norm – the cognitive context we place the decision in. Into this subjective norm fall our experience (our own evaluation of our efficacy), the attitudes of others towards the technology and also the "voluntariness" of the acceptance. Obviously, our intention to use will be stronger if it's a non-negotiable corporate mandate, as opposed to a low-priority choice we have the latitude to make.

What is missing from the TAM 2 model is the feedback loop between Perceived Ease of Use and actual Usage. Just like a New Year's resolution, intentions don't always become actions. Venkatesh and Davis said Perceived Ease of Use is a moving, iteratively updated calculation. As we gain hands-on experience, we update our original estimate of Ease of Use, either positively or negatively. If the update is positive, it's more likely that Intention will become Usage. If it's negative, the technology may fail to be accepted. In fact, I would say this feedback loop is an ongoing process that may repeat several times in the space between Intention and Usage. The model, with a single arrow going in one direction from Intention to Usage, belies the complexity of what is happening here.
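
A hedged sketch of that gap, with weights and thresholds invented purely for illustration (TAM 2 is a statistical model estimated from survey data, so none of these numbers are Venkatesh and Davis's):

```python
# Invented numbers throughout; the shape of the process is the point.

PU_WEIGHT, PEU_WEIGHT = 0.7, 0.3    # Perceived Usefulness dominates intention
USAGE_THRESHOLD = 0.8               # intention must stay this strong to become usage

def intention(pu, peu):
    return PU_WEIGHT * pu + PEU_WEIGHT * peu

pu, peu = 0.9, 0.8                  # optimistic first impressions
print(f"initial intention: {intention(pu, peu):.2f}")

# Hands-on experience reveals the tool is harder than it looked; Perceived
# Ease of Use is iteratively revised downward, dragging intention with it.
for observed_ease in [0.6, 0.4, 0.3]:
    peu = 0.5 * peu + 0.5 * observed_ease     # blend old estimate with new evidence
    print(f"revised intention: {intention(pu, peu):.2f}")

print("intention becomes usage" if intention(pu, peu) >= USAGE_THRESHOLD
      else "intention never becomes usage")
```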

Venkatesh and Davis wanted to create a more realistic model, expanding the front end of the model to account for determinants going into the creation of Intention. They also wanted to provide a model of the decision process that better represented how we balance Perceived Usefulness and Perceived Ease of Use. I think they made some significant gains here. But the model is still a linear one – going in one direction only. What they missed is the iterative nature of acceptance decisions, especially in the gap between Intention and Behavior.

In Part Four, we’ll look at TAM 3 and see how Venkatesh further modified his model to bring it closer to the real world.

The Psychology of Usefulness: The Acceptance of Technology – Part Two

In my last post, I talked about how the Theory of Reasoned Action and the original Technology Acceptance Model tried to predict both intention and usage of new technologies. As a quick recap, let’s look again at Davis and Bagozzi’s original model.

[Figure: the Technology Acceptance Model]

In aiming for the simplest model possible, significant conflation was applied to the front end of the model – with just one box representing external variables, which then led to two similarly conflated boxes: Perceived Usefulness and Perceived Ease of Use. While this simplification was admirable in the quest for parsimony, in real-world situations it went too far. There was a lot happening between the three boxes at the front of the model that demanded closer examination.

Davis indicated that there was an interesting relationship between Perceived Usefulness and Perceived Ease of Use. One of the mechanisms at play that has to be understood is self-efficacy. In understanding the adoption of technology, self-efficacy is a key factor. Essentially, it means that the easier a system is to use, the greater the user's sense of efficacy. They believe they have control over what they are doing. And control, especially in a work context, is a strong motivational driver. There is an extensive body of work exploring the psychological importance of control. If we feel we're in control, we also feel empowered to mitigate risk. The concept of self-efficacy helps to highlight the importance of the Perceived Ease of Use box. But what about the other box: Perceived Usefulness?

Davis, in his accompanying notes and research, indicated that Perceived Usefulness is a stronger indicator of intention than Perceived Ease of Use. In other words, we are willing to put up with some pain to learn a new technology if we feel it will offer a significant improvement in our ability to complete a task. This balancing equation requires two heuristic evaluations on the part of the user: the allocation of cognitive resources required to gain proficiency and the expected usefulness of the tool once proficiency is gained. This is exactly the same equation used in Charnov’s Marginal Value Theorem, applied in a different context. In optimal foraging, we (and all animals who forage) balance expenditure of resources required to reach a food patch against the expected food value to be derived from that patch. In technology adoption, we balance expenditure of resources required to master a new technology against the increased usefulness that technology offers.
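
As a back-of-envelope rendering of that balancing act (the numbers and the linear payoff model are my own assumptions, offered only to show the shape of the trade-off, not anything from Davis's or Charnov's papers):

```python
# Hypothetical numbers; the Charnov-style comparison is the point, not the values.

def worth_switching(current_rate, new_rate, learning_cost, horizon=100):
    """Stay with the incumbent tool unless the new tool's total payoff,
    net of the up-front learning effort, beats it over the planning horizon."""
    incumbent = current_rate * horizon
    challenger = new_rate * horizon - learning_cost
    return challenger > incumbent

# A tool 20% more useful, costing 15 task-units of effort to master:
print(worth_switching(current_rate=1.0, new_rate=1.2, learning_cost=15))  # True

# The same tool with a much steeper learning curve no longer pays off:
print(worth_switching(current_rate=1.0, new_rate=1.2, learning_cost=40))  # False
```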

In this heuristic evaluation, there are four key marketing lessons for anyone whose business model relies on the adoption of new technology:

1)   Lessen the intimidation of the learning curve. Persuade the user (and this is a key point that I'll return to in point 4) that this is a reasonable investment of resources. Build a sense of perceived ease of use. Provide visible links to intuitive learning resources. Often, marketers overplay the feature benefits of their products to show how powerful they are. But, as they do this, they fail to realize that it upsets the balance between perceived usefulness and perceived ease of use.

2)   Provide clear examples of perceived usefulness in terms that are immediately relevant. Remember, this is the key factor in the equation the prospect is trying to balance. The more salient you can make the perceived usefulness, the more likely the user is to adopt it, even if a learning curve is present. Ideally, get that usefulness across with very specific, industry relevant examples that allow the user to visualize usage of the technology.

3)   Remember that the user is balancing the two factors. Ease of use is great, but it can’t come at the expense of overall usefulness. In fact, in calculating the right balance (which should be done with extensive testing feedback from target customers) it should offer a significant gain in usefulness (as measured against any incumbent technologies) with a relatively manageable investment of resources.

4)   Remember that you’re talking to a user. When trying to strike the right balance, remember that you’ll probably be talking to different people as the decision progresses. For the user, the right balance between perceived usefulness and perceived ease of use must be struck. But at some point, you’ll be talking to a buyer, not a user, before the sale actually is closed. This would be one of those external variables that fall outside the scope of the Technology Adoption Model. This switching of roles from “doers” to “buyers” is dealt with extensively in my book, The BuyerSphere Project.

In the next post, I’ll talk about how the Technology Acceptance Model has been modified over the past 2 decades so it better reflects real world decision making.

The Psychology of Usefulness: The Acceptance of Technology – Part One

In the last post, I talked about what it takes to break a habit built around an online tool, website or application. In today's post, I want to talk about what happens when we decide to replace that functional aid, whatever it might be.

So, as I said last time, the biggest factor contributing to the breakdown of habit is the resetting of our expectation of what is an acceptable outcome. If our current tools no longer meet this expectation, then we start shopping for a new alternative. In marketing terms, this would be the triggering of need.

Now, this breakdown of expectation can play out in one of two ways. First, if we're not aware of an alternative solution, we may just feel an accumulation of frustration and dissatisfaction with our current tools. This build-up of frustration can create a foundation for further "usefulness foraging" but generally isn't enough by itself to trigger action. This lends support to my hypothesis that we're borrowing the evolved Marginal Value algorithm to help us judge the usefulness of our current tools. To put it in biological terms we're more familiar with: "A bird in the hand is worth two in the bush." You don't leave a food patch unless: A) you are reasonably sure there's another, more promising patch that can be reached with acceptable effort, or B) you have completely exhausted the food available in the patch you're in. I believe the same is true for usefulness. We don't throw out what we have until we either know there's an acceptable alternative that promises a worthwhile increase in usefulness or our current tool is completely useless. Until then, we put up with the frustration.

The Technology Acceptance Model

Let's say that we have decided it's worth the effort to find an alternative. What are the mechanisms we use to find the best one? Fred Davis and Richard Bagozzi tackled that question in 1989 and came up with the first version of their Technology Acceptance Model. They took the Theory of Reasoned Action, put forward by Martin Fishbein and Icek Ajzen a decade earlier (1975, 1980), and tried to apply it to the adoption of new technology. They also relied on the work Everett Rogers did on the diffusion of innovations.

First of all, like all models, the TAM had to make some assumptions to simplify real world decisions down to a theoretical model. And, in doing so, it has required a number of revisions to try to bring it closer to what technology adoption decisions look like in the real world.

Let's start with the foundation: the Theory of Reasoned Action. In its simplest form, the TRA says that voluntary behavior is predicted by an individual's attitude towards that behavior and how they think others would think of them if they performed that behavior.
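
Fishbein and Ajzen expressed this as a weighted sum of the two components. A minimal rendering, with the caveat that the weights below are arbitrary – in the actual theory they're estimated empirically for each behavior and population:

```python
# BI = w1 * A + w2 * SN: behavioral intention from attitude toward the
# behavior (A) and the subjective norm (SN). Weights here are made up.

def behavioral_intention(attitude, subjective_norm, w1=0.6, w2=0.4):
    return w1 * attitude + w2 * subjective_norm

# A behavior you view very favorably (0.9) in a circle that mildly
# approves of it (0.6):
print(behavioral_intention(0.9, 0.6))   # 0.78 -- a fairly strong intention
```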

[Figure: the Theory of Reasoned Action model]

So, let’s take the theory for a test drive – if you believe that exercising will increase your health and you also believe that others in your social circle will applaud you for exercising, you’ll exercise. With this example, I think you begin to see where the original TRA may run into problems. Even with the best of intentions, we may not actually make it to the gym. Fishbein and Ajzen’s goal was to create an elegant, parsimonious model that would reliably predict both behaviors and intentions, creating a distinction between the two. Were they successful?

In a meta-analysis of TRA, Sheppard et al (1988) found that attitude was a fairly accurate predictor of intention. If you believe going to the gym is a good thing, you will probably intend to go to the gym. The model didn’t do quite as good a job in predicting behavior. Even if you did intend to go to the gym, would you actually go?

The successful progression from intention to behavior seemed to be reliant on several real-world factors, including the time between intention and action (the longer the interval, the greater the erosion of intention) and also lack of control. For example, in the gym scenario, what if your gym suddenly increased its membership fees, or a sudden snowstorm made it difficult to drive there?

Also, if you were choosing from a set of clear alternatives and had to choose one, TRA did a pretty good job of predicting behaviors. But if alternatives were undetermined, or there were other variables to consider, then the predictive accuracy of TRA dropped significantly.

Let me offer an example of how TRA might not work very well in a real-world setting. In my book, The BuyerSphere Project, I spent a lot of time looking at the decision process in B2B buying scenarios. If we used the TRA model, we could say that if a buyer had to choose between four different software programs for their company, we could use their attitudes towards each of the respective programs, as well as the aggregated (and weighted – because not every opinion should carry the same weight) attitudes of the buyer's co-workers, peers and bosses towards those programs, to determine their intention. And once we have their intention, that should lead to behavior.

But in this scenario, let's look at some of the simplifying assumptions we've had to make to try to cram a real-world scenario into the Fishbein-Ajzen model:

  • We assume a purchase will have to be made from one of the four alternatives. In a real world situation, the company may well decide to stick with what they have
  • We assume the four choices will remain static and we won’t get a new candidate out of left field
  • We assume that attitudes towards each of the alternatives will remain static through the behavioral interval and won’t change. This almost never happens in B2B buying scenarios
  • We assume the buyer – or rational agent – will be in full control of their behaviors and the ultimate decision. Again, this is rarely the case in B2B buying decisions.
  • We assume that there won’t be some mitigating factor that arises in between intention and behavior – for example a spending freeze or a change in requirements.

As you can see, in trying to create a parsimonious model, Fishbein and Ajzen ran into a common trap – they simplified to the point where the model failed to work consistently in the real world.

But in this review, Alice Darnell pointed out Sheppard's main criticism of the TRA model:

Sheppard et al. (1988) also addressed the model’s main limitation, which is that it fails to account for behavioural outcomes which are only partly under the individual’s volitional control.

I've emphasized the word volitional on purpose. I've highlighted many external factors that may lie beyond the volitional control of the individual, but I think the biggest limitation of the TRA lies in its name: the Theory of Reasoned Action. It assumes that reason drives our intentions and behaviors. It doesn't account for emotion.

Applying Reasoned Action to Technology Acceptance

Now, let's see how Davis and Bagozzi took Fishbein and Ajzen's foundational work and applied it to the acceptance of new technologies.

In their first model (1989), they took attitudes and subjective norms (the attitudes of others) and adapted them for a more applied activity: the use of a new technological tool. They came up with two attitude drivers: Perceived Usefulness and Perceived Ease of Use. If you think back to Charnov's Marginal Value Theorem, this is exactly the same risk/reward mechanism at work. In foraging, it would be yield of food over perceived required effort. In technology acceptance, Perceived Usefulness is the reward and Perceived Ease of Use is the risk to be calculated. Davis and Bagozzi assume the user does a quick mental calculation, using their own knowledge and the knowledge of others, to come up with a Usefulness/Ease value that creates their attitude towards using. This then becomes their Behavioral Intention to Use – which should lead to Actual System Use.

[Figure: the original TAM model]

The TAM model was clean and parsimonious. There was just one problem. It didn’t do a very good job of predicting usage in real world situations. There seemed to be much more at work here in actual decisions to accept technologies. In the next post, we’ll look at how the TAM model was modified to bring it closer to real behaviors.

The Psychology of Usefulness: How Online Habits are Broken

Last post, I talked about how Google became a habit – Google being the most extreme case of online loyalty based on functionality I could think of. But here's the thing with functionally based loyalty – it's very fickle. In the last post I explained how Charnov's Marginal Value Theorem dictates how long animals spend foraging in a patch before moving on to the next one. I suspect the same principles apply to our judging of usefulness. We only stay loyal to functionality as long as we believe there are no more functional alternatives available to us for an acceptable investment of effort. If that functionality has become automated in the form of a habit, we may stick with it a little longer, simply because it takes our rational brain awhile to figure out there may be better options, but sooner or later it will blow the whistle and we'll start exploring our options. Charnov's internal algorithm will tell us it's time to move on to the next functional "patch."

Habits break down when there's a shift in one of the three prerequisites: frequency, stability or acceptable outcomes.

If we stop doing something on a frequent basis, the habit will slowly decay. But because habits tend to be stored at the limbic level (in the basal ganglia), they prove to be remarkably durable. There’s a reason we say old habits die hard. Even after a long hiatus we find that habits can easily kick back in. Reduction of frequency is probably the least effective way to break a habit.

A more common cause of habitual disruption is a change in stability. If something significant suddenly changes in our task environment, our "habit scripts" start running into obstacles. Think about the last time you did a significant upgrade to a program or application you use all the time. If menu options or paths to common functions change, you find yourself constantly getting frustrated because things aren't where you expect them to be. Your habit scripts aren't working for you anymore and you are being forced to think. That feeling of frustration is how the brain protects habits, and it shows how powerful our neural energy-saving mode is. But, even if the task environment becomes unstable for a time, chances are the instability is temporary. The brain will soon reset its habits and we'll be back plugging subconsciously away at our tasks. Instability does break a habit, but the brain just builds a new one to take its place.

A more permanent form of habit disruption comes when outcomes are no longer acceptable. The brain hates these types of disruptions, because it knows that finding an alternative could require a significant investment of effort. It basically puts us back at square one. The amount of investment required depends on a number of things, including the scope of change required (is it just one aspect of a multi-step task or the entire procedure?), current awareness of acceptable alternatives (is a better solution near at hand or do we have to find it?), the learning curve involved (how different is the alternative from what we're used to using?), other adoption requirements (do we have to invest resources, including time and/or money?) and how much downtime will be involved in adopting the alternative. All these questions are the complexities that can factor into the Marginal Value Theorem.

Now, let’s look at how each of these potential habit breakers applies to Google. First of all, frequency probably won’t be a factor because we will search more, not less, in the future.

Stability may be a more likely cause. The fact is, the act of online searching hasn’t really changed that much in the last 20 years. We still type in a query and get a list of results. If you look at Google circa 1998, it looks a little clunky and amateurish next to today’s results page, but given that 16 years have come and gone, the biggest surprise is that the search interface hasn’t changed more than it has.

[Figure: Google now and then]

A big reason for this is to maintain stability in the interface, so habits aren't disrupted. The search page relies on ease of information foraging, so it's probably the most tested piece of online real estate in history. Every pixel of what you see on Google and, to a lesser extent, its competitors has been exhaustively tested.

That has been true in the past, but because of the third factor – acceptability of outcomes – it's not likely to remain true in the future. We are now in the age of the app. Searching used to be a discrete function that was just one step of many required to complete a task. We were content to go to a search engine, retrieve information and then use that information elsewhere with other tools or applications. In our minds, we had separate chunks of online functionality that we would assemble as required to meet our end goal.

Let me give you an example. Let's imagine we're going to London for a vacation. In order to complete the end goal – booking flights, hotels and whatever else is required – we know we will probably have to go to many different travel sites, look up different types of information and undertake a number of actions. We expect that this will be the best path to take to our end goal. Each chunk of this "master task" may in turn be broken down into separate sub-tasks. Along the way, we'll be relying on those tools that we're aware of and a number of stored procedures that have proven successful in the past. At the sub-task level, it's entirely possible that some of those actions have been encoded as habits. For an example of how these tasks and stored procedures would play out in a typical search, see my previous post, A Cognitive Walkthrough of Searching.

But we have to remember that the only reason the brain is willing to go to all this work is that it believes this is the most efficient route available to it. If there were a better alternative that would produce an acceptable outcome, the brain would take it. Our expectation of what constitutes an acceptable outcome would be altered, and our Marginal Value algorithm would be reset.

Up to now, functionality and information haven't intersected too often online. There were places we went to get information, and there were places we went to do things. But from this point forward, expect those two aspects of the online world to overlap more and more often. Apps will retrieve information and integrate it with usefulness. Travel aggregator sites like Kayak and Expedia are an early example of this. They retrieve pricing information from vendors, user content from review sites and even some destination-related information from travel sites. This ups the game in terms of what we expect from online functionality when we book a trip. Our expectation has been reset because Kayak offers a more efficient way to book travel than using search engines and independent vendor sites. That's why we don't immediately go to Google when we're planning a trip.

Let’s fast-forward a few years to see how our expectations could be reset in the future. I suspect we’re not too far away from having an app where our travel preferences have been preset. This proposed app would know how we like to travel and the things we like to do when we’re on vacation. It would know the types of restaurants we like, the attractions we visit, the activities we typically do, the types of accommodation we tend to book, etc.  It would also know the sources we tend to use when qualifying our options (i.e. TripAdvisor). If we had such an app, we would simply put in the bare details of our proposed trip: departure and return dates, proposed destinations and an approximate itinerary. It would then go and assemble suggestions based on our preferences, all in one location. Booking would require a simple click, because our payment and personal information would be stored in the app. There would be no discrete steps, no hopping back and forth between sites, no cutting and pasting of information, no filling out forms with the same information multiple times. After confirmation, the entire trip and all required information would be made available on your mobile device.  And even after the initial booking, the app would continue to comb the internet for new suggestions, reviews or events that you might be interested in attending.

This "mega-app" would take the best of Kayak, TripAdvisor, Yelp, TripIt and many other sites and combine it all in one place. If you love travel as much as I do, you couldn't wait to get your hands on such an app. And the minute you did, your brain would have reset its idea of what an acceptable outcome would be. There would be a cascade of broken habits and discarded procedures.

This integration of functionality and information foraging is where the web will go next. Over the next 10 years, usefulness will become the new benchmark for online loyalty. As this happens, our expectation set points will be changed over and over again. And this, more than anything, will be what impacts user loyalty in the future. This changing of expectations is the single biggest threat that Google faces.

In the next post I’ll look at what happens when our expectations get reset and we have to look at adopting a new technology.

The Psychology of Usefulness: How We Made Google a Habit

In the last two posts, I looked first at the difference between autotelic and exotelic activities, then how our brain judges the promise of usefulness. In today’s post, I want to return to the original question: How does this impact user loyalty? As we use more and more apps and destinations that rely on advertising for their revenues, this question becomes more critical for those apps and destinations.

The obvious example here is search engines, the original functional destination. Google is the king of search, but also the company most reliant on these ads. For Google, user loyalty is the difference between life and death. In 2012, Google made a shade over 50 billion dollars (give or take a few hundred million). Of this, over $43 billion came from advertising revenue (about 86%), and of that revenue, 62% came from Google's own search destinations. That's a big chunk of revenue to come from one place, so user loyalty is something Google pays pretty close attention to.

Now, let's look at how durable Google's hold on our brains is. Let's revisit the evaluation cascade that happens in our brain each time we contemplate a task:

  • If very familiar and highly stable, we do it by habit
  • If fairly familiar but less stable, we do it by a memorized procedure with some conscious guidance
  • If new and unfamiliar, we forage for alternatives by balancing the effort required against the expected reward

Not surprisingly, the more our brain has to be involved in judging usefulness, the less loyal we are. If you can become a habit, you are rewarded with a fairly high degree of loyalty. Luckily for Google, they fall into this category – for now. Let's look a little more at how Google became a habit and what might have to happen for us to break this habit.

Habits depend on three things: high repetition, a stable execution environment and consistently acceptable outcomes. Google was fortunate enough to have all three factors present.

First – repetition. How many times a day do you use a search engine? For me, it's probably somewhere between 10 and 20 times per day. And usage of search is increasing. We search more now than we did 5 years ago. If you do something that often throughout the day, it wouldn't make much sense to force your brain to actively think its way through that task each and every time – especially if the steps required to complete that task don't really change much. So the brain, which is always looking for ways to save energy, records a "habit script" (or, to use the terminology of Ann Graybiel, "chunks") that can play out without a lot of guidance. Searching definitely meets the first requirement for forming a habit.

Second – stability. How many search engines do you use? If you're like the majority of North Americans, you probably use Google for almost all your searches. This introduces what we would call a stable environment. You know where to go, you know how to use it and you know how to use the output. There is a reason why Google is very cautious about changing its layout and only does so after a lot of testing. What you expect and what you get shouldn't be too far apart. If they are, that's called disruption, and disruption breaks habits. This is the last thing that Google wants.

Third – acceptable outcomes. So, if stability preserves habits, why would Google change anything? Why doesn't Google's search experience look exactly like it did in 1998 (fun fact – if you search Google for "Google in 1998" it will show you exactly what the results page looked like)? That would truly be stable, which should keep those all-important habits glued in place. Well, because expectations change. Here's the thing about expected utility – which I talked about in the last post. Expected utility doesn't go away when we form a habit; it just moves downstream in the process. When we do a task for the first time, or in an unstable environment, expected utility precedes our choice of alternatives. When a "habit script" or "chunk" plays out, we still need to do a quick assessment of whether we got what we expected. Habits only stay in place if the "habit script" passes this test. If we searched for "Las Vegas hotels" and Google returned results for Russian borscht, that habit wouldn't last very long. So Google constantly has to maintain this delicate balance – meeting expectations without disrupting the user's experience too much. And expectations are constantly changing.
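
That downstream check is easy to picture as a loop: run the habit script, compare the outcome to expectations, and keep the habit only while the comparison passes. A toy sketch, with the threshold and the reinforcement/erosion rates invented purely for illustration:

```python
# Toy model of a habit's post-hoc expectation check; all numbers invented.

EXPECTATION = 0.8        # the relevance we've come to expect from a search
habit_strength = 1.0

for outcome_quality in [0.9, 0.85, 0.3, 0.2]:   # quality of each result set
    if outcome_quality >= EXPECTATION:
        habit_strength = min(1.0, habit_strength + 0.05)   # habit reinforced
    else:
        habit_strength -= 0.4                              # habits erode fast
    if habit_strength <= 0:
        print("habit broken: time to forage for a new search engine")
        break
else:
    print(f"habit intact, strength {habit_strength:.2f}")
```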

[Figure: Internet adoption over time]

When Google was introduced in 1998, it created a perfect storm of habit-building potential. The introduction coincided with a dramatic uptick in adoption of the internet and usage of web search in particular. In 1998, 36% of American adults were using the Internet (according to Pew). In 2000, that had climbed to 46%, and by 2001 it was up to 59%. More of us were going online, and if we were going online we were also searching. The average searches per day on Google exploded from under 10,000 in 1998 to 60 million in 2000 and 1.2 billion in 2007. Obviously, we were searching – a lot – so the frequency-of-task prerequisite was well in hand.

Now – stability. In the early days of the Internet, there was little stability in our search patterns. We tended to bounce back and forth between a number of different search engines. In fact, the search engines themselves encouraged this by providing "Try your search on…" links for their competitors (an example from Google's original page is shown below). Because our searches were spread across a number of different engines, there was no environmental stability, and so no chance for the creation of a true habit. The best our brains could do at this point was store a procedure that required a fair amount of conscious oversight (choosing engines and evaluating outcomes). Stability was further eroded by the fact that some engines were better at some types of searches than others. Some, like Infoseek, were better for timely searches due to their fast indexing cycles and large indexes. Some, like Yahoo, were better at canonical searches that benefited from a hierarchical directory approach. When searching in the pre-Google days, we tended to match our choice of engine to the search we were doing. This required a fairly significant degree of rational neural processing on our part, precluding the formation of a habit.

[Figure: the bottom of Google's results page, 1998]

But Google’s use of PageRank changed the search ballgame dramatically. Their new way of determining relevancy rankings was consistently better for all types of searches than any of their competitors. As we started to use Google for more types of searches because of their superior results, we stopped using their competitors. This finally created the stability required for habit formation.

Finally, acceptable outcomes. As mentioned above, Google came out of the gate with outcomes that generally exceeded our expectations, which had been set by the spotty results of its competitors. Now, all Google had to do to keep the newly formed habit in place was to continue to meet the user's expectations of relevancy. Thanks to the truly disruptive leap Google took with the introduction of PageRank, they had a huge advantage when it came to search results quality. Google has also done an admirable job of maintaining that quality over the past 15 years. While the gap has narrowed significantly (today, one could argue that Bing comes close on many searches and may even have a slight advantage on certain types of searches), Google has never seriously undershot the user's expectations when it comes to providing relevant search results. Therefore, Google has never given us a reason to break our habits. This has resulted in a market share that has hovered over 60% for several years now.

When it comes to online loyalty, it's hard to beat Google's death grip on search traffic. But that grip may start to loosen in the near future. In my next post, I'll look at the conditions that can break habitual loyalty, again using Google as an example. I'll also look at how our brains decide to accept or reject new useful technologies.

The Psychology of Usefulness: How Our Brains Judge What is Useful

Did you know that "task" and "tax" have the same linguistic roots? They both come from the Latin "taxare" – meaning to appraise. This could explain the lack of enthusiasm we have for both.

Tasks are what I referred to in the last post as an exotelic activity – something we have to do to reach an objective that carries no inherent reward. We do them because we have to do them, not because we want to do them.

When we undertake a task, we want to find the most efficient way to get it done. Usefulness becomes a key criterion. And when we judge usefulness, there are some time-tested procedures the brain uses.

Stored Procedures and Habits

The first question our brain asks when undertaking a task is – have we done this before? Let’s first deal with what happens if the answer is yes:

If we've done something before, our brain – very quickly and at a subconscious level – asks a number of qualifying questions:

• How often have we done this?

• Does the context in which the task plays out remain fairly consistent (i.e. are we dealing with a stable environment)?

• How successful have we been in carrying out this task in the past?

If we’ve done a task a number of times in a stable environment with successful outcomes, it’s probably become a habit. The habit chunk is retrieved from the basal ganglia and plays out without much in the way of rational mediation. Our brain handles the task on autopilot.

If we have less familiarity with the task, or if there's less stability in the environment, but we have done it before, we probably have stored procedures – preset procedural alternatives. These require more in the way of conscious guidance and often have decision points where we have to determine what to do next, based on the results of the previous action.

If we’re entering new territory and can’t draw on past experience, our brains have to get ready to go to work. This is the route least preferred by our brain. It only goes here when there’s no alternative.

Judging Expected Utility and Perceived Risk

If a task requires us to go into unfamiliar territory, there are new routines that the brain must perform. Basically, the brain must place a mental bet on the best path to take, balancing a prediction of a satisfactory outcome against the resources required to complete the task. Psychologists call this “Expected Utility.”

Expected Utility is the brain’s attempt to forecast scenarios that require the balancing of risks and rewards where the outcomes are not known.  The amount of processing invested by the brain is usually tied to the size of the potential risk and reward. Low risk/reward scenarios require less rationalization. The brain drives this balance by using either positive or negative emotional valences, interpreted by us as either anticipation or anxiety. Our emotional balance correlates with the degree of risk or reward.
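
In its textbook form, Expected Utility is a probability-weighted sum over possible outcomes. Here's a minimal sketch, with probabilities and payoffs invented purely for illustration:

```python
# Expected utility of attempting an unfamiliar path to a goal.
# Probabilities and payoffs are invented for illustration.

outcomes = [
    (0.6, 10),    # 60% chance the new path works, saving 10 units of effort
    (0.4, -4),    # 40% chance it fails, wasting 4 units
]

expected_utility = sum(p * payoff for p, payoff in outcomes)
print(expected_utility)   # 4.4 -- positive, so the mental bet looks worth placing
```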

Expected utility is more commonly applied in financial decision-making and game theory. In the case of conducting a task, there is usually no monetary element to risk and reward. What we're risking is our own resources – time and effort. Because these are long-established, evolved resources, it's reasonable to assume that we have developed subconscious routines to determine how much effort to expend in return for a possible gain. This would mean that these cognitive evaluations and calculations happen at a largely subconscious level – or at least, more subconscious than the processing involved in evaluating financial gambles or decisions with higher degrees of risk and reward. In that context, it might make sense to look at how we approach another required task: finding food.

Optimal Foraging and Marginal Value

When we balance gain against expenditure of time and effort, the brain has some highly evolved routines that have developed over our history. The oldest of these would be how we forage for food. But we also have a knack for borrowing strategies developed for other purposes and using them in new situations.

Pirolli and Card (1999) found, for instance, that we use our food foraging strategies to navigate digital information. Like food, information online tends to be "patchy" and of varying value to us. Often, just like looking for a food source, we have to forage for information by judging the quality of hyperlinks that may take us to those information sources or "patches." Pirolli and Card called these clues to the quality of information that may lie at the other end of a link "information scent."

Tied to this foraging strategy is the concept of Marginal Value. This was first proposed by Eric Charnov in 1976 as an evolved strategy for determining how much time to spend in a food patch before moving on. In a situation with diminishing returns (i.e. depleting food supplies), the brain must balance effort expended against return. If you happen on a berry bush in the wild, with a reasonable certainty that there are other bushes nearby (perhaps you can see them just a few steps away), you have to mentally solve the following equation: how many berries can be gathered here with a reasonable expenditure of effort, versus how much effort would it take to walk to the next bush and how many berries would be waiting there?
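
Charnov’s answer can be stated compactly. If g(t) is the cumulative gain after t seconds of foraging in a patch and τ is the travel time to the next patch, the long-run rate of gain is maximized by leaving at the moment t* when the instantaneous return drops to the overall average rate:

    g'(t^*) = \frac{g(t^*)}{\tau + t^*}

In berry-bush terms: leave this bush the moment your picking rate dips below the average rate you’d earn by roaming the whole patch, walking time included.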

This is somewhat analogous to information foraging, with one key difference: information isn’t depleted as you consume it, so the rule of diminishing returns is less relevant. But if, as I suspect, we’ve borrowed these subconscious strategies for judging usefulness – both in terms of information and functionality – in online environments, our brains may not know or care about the subtle differences between the two settings.

The reason we may not be entirely rational in applying these strategies online is that they play out below the threshold of consciousness. We are not constantly and consciously adjusting our marginal value algorithm or quantifiably assessing the value of an information patch. No, our brains use a quicker, more heuristic method to mediate our output of effort – emotions. Frustration and anxiety tell us it’s time to move on to the next site or application. Feelings of reward and satisfaction indicate we should stay right where we are. The remarkable thing is that, as quick and dirty as these emotional guidelines are, if you went to the trouble of rationally quantifying the potential of all possible alternatives – using a Bayesian approach, for instance – you’d probably find you ended up in pretty much the same place. These strategies, simmering below the surface of our consciousness, are pretty damn accurate!
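
To make that claim a little more concrete, here is a small numeric sketch (my own toy model with invented numbers, not anything from the research): a bush with diminishing returns, where a crude “frustration” rule – leave once your picking rate drops to half its starting value – is compared against the rational, rate-maximizing leave time.

    # Toy model: a berry bush with diminishing returns,
    # g(t) = G * (1 - exp(-r * t)). Illustrative numbers only.
    import math

    G, r, travel = 100.0, 0.1, 10.0  # bush size, depletion rate, travel time

    def gain(t):
        """Cumulative berries after t seconds at the bush."""
        return G * (1.0 - math.exp(-r * t))

    def long_run_rate(t):
        """Berries per second if we always leave after t seconds."""
        return gain(t) / (travel + t)

    times = [i / 100.0 for i in range(1, 5000)]

    # Rational answer: brute-force the rate-maximizing leave time.
    t_opt = max(times, key=long_run_rate)

    # Heuristic answer: leave when the picking rate halves.
    t_heur = next(t for t in times if math.exp(-r * t) <= 0.5)

    print(f"rational:  leave at {t_opt:4.1f}s -> {long_run_rate(t_opt):.2f} berries/s")
    print(f"heuristic: leave at {t_heur:4.1f}s -> {long_run_rate(t_heur):.2f} berries/s")
    # With these numbers, the quick-and-dirty rule earns roughly 93%
    # of the optimal rate - "pretty much the same place."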

So, to sum up this post: when judging the most useful way to get a task done, an evaluation cascade happens very quickly in our brains (there’s a rough code sketch after the list):

  • If a very familiar task needs to be done in a stable environment, our habits will take over and it will be executed with little or no rational thought.
  • If the task is fairly familiar but requires some conscious guidance, we’ll retrieve a stored procedure and look for successful feedback as we work through it.
  • If a task is relatively new to us, we’ll forage through alternatives for the best way to do it, using evolved biological strategies to help balance risk (in terms of expended effort) against reward.
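
As a rough sketch of that cascade in code (the three categories come from this post; the function, thresholds and numbers are invented purely for illustration):

    # Illustrative-only sketch of the evaluation cascade. The three
    # strategies are from the post; all thresholds are made up.

    def choose_strategy(times_done, env_stable, past_success_rate):
        """Pick the brain's least-effort viable strategy for a task."""
        if times_done > 20 and env_stable and past_success_rate > 0.9:
            return "habit"             # basal ganglia, autopilot
        if times_done > 0:
            return "stored procedure"  # known steps, conscious checkpoints
        return "foraging"              # new territory: balance risk vs. reward

    print(choose_strategy(50, True, 0.95))  # -> habit
    print(choose_strategy(3, False, 0.60))  # -> stored procedure
    print(choose_strategy(0, False, 0.0))   # -> foraging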

Now, to return to our original question, how does this evaluation cascade impact long and short-term user loyalty? I’ll return to this question in my next post.

Revisiting Entertainment vs Usefulness

Some time ago, I did an extensive series of posts on the psychology of entertainment. My original goal, however, was to compare entertainment and usefulness in terms of how effective each is at engendering long-term loyalty. How do our brains process both? And, to return to the original intent of that first post almost 4 years ago, how does this impact digital trends and their staying power?

My goal is to find out why some types of entertainment have more staying power than others. And then, once we discover the psychological underpinnings of entertainment, let’s look at how they apply to some of the digital trends I disparaged: things like social networks, micro-blogging, mobile apps and online video. What role does entertainment play in online loyalty? How does it overlap with usefulness? How can digital entertainment fads survive the novelty curse and jump the chasm to become mainstream trends with legs?

In the previous set of posts, I explored the psychology of entertainment extensively, ending up with a discussion of the evolutionary purpose of entertainment. My conclusion was that entertainment lived more in the phenotype than the genotype. To save you going back to that post, I’ll quickly summarize here: the genotype refers to traits actually encoded in our genes through evolution – the hardwired blueprint of our DNA. The phenotype is the “shadow” of these genes – behaviors caused by our genetic blueprints. Genotypes are directly honed by evolution for adaptability and gene survival. Phenotypes are by-products of this process and may confer no evolutionary advantage. Our taste for high-fat foods lives in the genotype – the explosion of obesity in our society lives in the phenotype.

This brings us to the difference between entertainment and usefulness – usefulness relies on mechanisms that predominantly live in the genotype. In the most general terms, it’s the stuff we have to do to get through the day. And to understand how we approach the things on our to-do list, it’s important to understand the difference between autotelic and exotelic activities.

Autotelic activities are the things we do for the sheer pleasure of it. The activity is its own reward. The word autotelic comes from the Greek for “self” + “goal” – “having a purpose in and not apart from itself.” We look forward to doing autotelic things. Everything we find entertaining is autotelic by nature.

Exotelic activities are simply a necessary means to an end. They have no value in and of themselves. They’re simply tasks – stuff on our to-do list.

The brain treats these two types of activities very differently. Autotelic activities fire our reward center – the nucleus accumbens. They come with a corresponding hit of dopamine, building repetitive patterns. We look forward to them because we anticipate the reward. They typically also engage the medial prefrontal cortex, which orchestrates complex cognitive behaviors and helps define our sense of self. When we engage in an autotelic activity, there’s a lot happening in our skulls.

Exotelic activities tend to flip the brain into its energy-saving mode. Because there is little or no neurological reward in these activities (other than a sense of relief once they’re done), they tend to rely on the brain’s ability to store and retrieve procedures. With enough repetition, they often become habits, skipping the brain’s rational loop altogether.

In the next post, we’ll look at how the brain tends to process exotelic activities, as it provides some clues about the loyalty building abilities of useful sites or tools. We’ll also look at what happens when something is both exotelic and autotelic.

The Death and Rebirth of Google+

Google Executive Chairman Eric Schmidt has come out with his predictions for 2014 for Bloomberg TV. Don’t expect any earth-shaking revelations here. Schmidt plays it pretty safe with his prognostications:

Mobile has won – Schmidt says everyone will have a smartphone. “The trend has been mobile was winning… it’s now won.” Less a prediction than stating the obvious.

Big Data and Machine Intelligence will be the Biggest Disruptor – Again, hardly a leap of intuitive insight. Schmidt foresees the evolution of an entirely new data marketplace and corresponding value chain. Agreed.

Gene Sequencing Has Promise in Cancer Treatments – While a little fuzzier than his other predictions, Schmidt again pounces on the obvious. If you’re looking for someone willing to bet the house on gene sequencing, try LA billionaire Patrick Soon-Shiong.

The one thing that was interesting to me was an admission of failure with Google+:

The biggest mistake that I made was not anticipating the rise of the social networking phenomenon.  Not a mistake we’re going to make again. I guess in our defense we were busy working on many other things, but we should have been in that area and I take responsibility for that.

I always called Google+ a non-starter, despite a deceptively encouraging start. But I think it’s important to point out that we tend to judge Google+ against Facebook or other social destinations. As Google+ Vice President of Product Bradley Horowitz made clear in an interview last year with Dailytech.com, Google never saw this as a “Facebook killer.”

“I think in the early going there was a lot of looking for an alternative [to Facebook, Twitter, etc.],” said Horowitz. “But I think increasingly the people who are using Google+ are the people using Google. They’re not looking for an alternative to anything, they’re looking for a better experience on Google.”

And this highlights a fundamental change in how we think about online social activity – one that I think is more indicative of what the future holds. Social is not a destination; social is a paradigm. It’s a layer of connectedness and shared values that acts as a filter, a lens – a way we view reality. That’s what social is in our physical world: it shapes how we view that world. And Horowitz is telling us that’s how Google looks at social too. With the layering of social signals into our online experience, Google+ gives us an enhanced version of that experience. It’s not about a single destination, no matter how big that destination might be. It’s about adding richness to everything we do online.

Because humans are social animals, our connections and our perception of ourselves as part of an extended network literally shape every decision we make and everything we do, whether we’re conscious of it or not. We are, by design, part of a greater whole. But because online social originated as distinct destinations, it was unable to permeate our entire online experience. Facebook or Pinterest acts as a social gathering place – a kind of virtual town square – but social is more than that. Google+ is closer to this more holistic definition of “social.”

I’m not sure Google+ will succeed in becoming our virtual social lens, but I do agree that as our virtual sense of social evolves, it will become less about distinct destinations and more about a dynamic paradigm that stays with us constantly, helping to shape, sharpen, enhance and define what we do online. As such, it becomes part of a new way of thinking about being online – not going to a destination, but being plugged into a network.