Five Years Later – An Answer to Lance’s Question (kind of)

It never ceases to amaze me how writing can take you down the most unexpected paths, if you let it. Over five years ago now, I wrote a post called “Chasing Digital Fluff – Who Cares about What’s Hot?” It was a rant, aimed at marketers’ preoccupation with whatever the latest bright shiny object was. At the time, it was social. My point was that true loyalty needs stable habits to emerge. If you’re constantly chasing the latest thing, your audience will be in a constant state of churn. You’d be practicing “drive-by” marketing. If you want to find stability, target what your audience finds useful.

This post prompted my friend Lance Loveday to ask a very valid question: “What about entertainment?” Do we develop loyalty to things that are entertaining? So I started a series of posts on the Psychology of Entertainment. What types of things do we find entertaining? How do we react to stories, or humor, or violence? And how do audiences build around entertainment? As I explored the research on the topic, I came to the conclusion that entertainment is a by-product of several human needs: the need to bond socially, the need to be special, our appreciation for others whom we believe to be special, a quest for social status, and artificially stimulated tweaks to our oldest instincts – to survive and to procreate. In other words, after a long and exhausting journey, I concluded that entertainment lives in our phenotype, not our genotype. Entertainment serves no direct evolutionary purpose, but it lives in the shadows of many things that do.

So, what does this mean for stability of an audience for entertainment? Here, there is good news, and bad news. The good news is that the raw elements of entertainment haven’t really changed that much in the last several thousand years. We can still be entertained by a story that the ancient Romans might have told. Shakespeare still plays well to a modern audience. Dickens is my favorite author and it’s been 144 years since his last novel was published. We haven’t lost our evolved tastes for the basic building blocks of entertainment. But, on the bad news side, we do have a pretty fickle history when it comes to the platforms we use to consume our entertainment.

This then introduces a conundrum for the marketer. Typically, our marketing channels are linked to platforms, not content. And technology has made this an increasingly difficult challenge. While we may connect to, and develop a loyalty for, specific entertainment content, it’s hard for marketers to know which platform we may consume that content on. Take Dickens, for example. Even if you, the marketer, know there’s a high likelihood that I may enjoy something by Dickens in the next year, you won’t know whether I’ll read a book on my iPad, pick up an actual book or watch a movie on any one of several screens. I’m loyal to Dickens, but I’m agnostic as to which platform I use to connect with his work. As long as marketing is tied to entertainment channels, and not entertainment content, we are restricted to targeting our audience in an ad hoc and transitory manner. This is one reason why brands have rushed to use product placement and other types of embedded advertising, where the message is set free from the fickleness of platform delivery challenges. If you happen to be a fan of American Idol, you’re going to see the Coke and Ford brands displayed prominently whether you watch on TV, your laptop, your tablet or your smartphone.

It’s interesting to reflect on the evolution of electronic media advertising and how it’s come full circle in this one regard. In the beginning, brands sponsored specific shows. Advertising messages were embedded in the content. Soon, however, networks, which controlled the only consumption choice available, realized it was far more profitable to decouple advertising from the content and run it in freestanding blocks during breaks in their programming. This decoupling was fine as long as there was no fragmentation in the channels available to consume the content, but obviously this is no longer the case. We now watch TV on our schedule, at our convenience, through the device of our choice. Content has been decoupled from the platform, leaving the owners of those platforms scrambling to evolve their revenue models.

So – we’re back to the beginning. If we want to stabilize our audience to allow for longer-term relationship building, what are our options? Obviously, entertainment offers some significant challenges in this regard, due mainly to the fragmentation of the platforms we use to consume that content. If we use usefulness as a measure, the main factors in determining loyalty are frequency and stability. If you provide a platform that becomes a habit, as Google has, then you’ll have a fairly stable audience. It won’t destabilize until there is a significant enough resetting of user expectations, forcing the audience to abandon habits (always very tough to do) and start searching for another useful tool that is a better match for the reset expectations. If this happens, you’ll be continually following your audience through multiple technology adoption curves. Still, it seems that usefulness offers a better shot at a stable audience than entertainment.

But there’s still one factor we haven’t explored – what part does social connection play? Obviously, this is a huge question that the revenue models of Facebook, Twitter, Snapchat and others will depend on. So, with entertainment and usefulness explored ad nauseam in this series of posts, I’ll start tracking down the Psychology of Social Connection.

Letting the Foxes into Journalism’s Hen (Hedgehog) House

First published March 27, 2014 in Mediapost’s Search Insider

I am rooting for Nate Silver and fivethirtyeight.com, his latest attempt to introduce a little data-driven veracity into the murky and anecdotal world of journalism. But I may be one of the few, at least if we take the current backlash as a non-scientific, non-quantitative sample:

I have long been a fan of Nate Silver, but so far I don’t think this is working. – Tyler Cowen, Marginal Revolution

Nate Silver’s new venture may become yet another outlet for misinformation when it comes to the issue of human-caused climate change. – Michael Mann, director of the Earth System Science Center at Pennsylvania State University

Here’s hoping that Nate Silver and company up their game, soon. – Paul Krugman, NY Times

Krugman also states:

You can’t be an effective fox just by letting the data speak for itself — because it never does. You use data to inform your analysis, you let it tell you that your pet hypothesis is wrong, but data are never a substitute for hard thinking.

Now, Nate Silver doesn’t disagree with this. In fact, he says pretty much the same thing in his book, The Signal and the Noise:

The numbers have no way of speaking for themselves. We speak for them. We imbue them with meaning.

But he goes on,

Like Caesar, we may construe them in self-serving ways that are detached from their objective reality.

And it’s this construal that Silver is hoping to nip in the bud with FiveThirtyEight. In essence, he wants to do it by being a Fox, to borrow from Isaiah Berlin’s analogy.

‘The fox knows many things, but the hedgehog knows one big thing.’ We take a pluralistic approach and we hope to contribute to your understanding of the news in a variety of ways.

Silver thinks the media’s preoccupation with punditry is a dangerous thing. Pundits, whether they’re coming from the right or left, are Hedgehogs. They get paid for their expertise on “one big thing.” And the more controversial their stand, the more attention they get. This can lead to a dangerous spiral, as researcher Philip Tetlock found out:

What experts think matters far less than how they think. If we want realistic odds on what will happen next, coupled to a willingness to admit mistakes, we are better off turning to experts who embody the intellectual traits of Isaiah Berlin’s prototypical fox—those who “know many little things,” draw from an eclectic array of traditions, and accept ambiguity and contradiction as inevitable features of life.

Tetlock was researching how expertise correlated with the ability to make good predictions. What he found was actually an inverse relationship: the higher the degree of expertise, the more likely the person in question was a hedgehog. Media pundits are usually extreme versions of hedgehogs, who not only hold one worldview but also love to talk about it. Nate Silver believes that to get an objective view of world events, you need to be a fox first; second, you should be a fox that’s good at sifting through data:

Conventional news organizations on the whole are lacking in data journalism skills, in my view. Some of this is a matter of self-selection. Students who enter college with the intent to major in journalism or communications have above-average test scores in reading and writing, but below-average scores in mathematics.

So, all this makes sense. The problem with Silver’s approach is that journalism is the way it is because that’s the way humans want it. While I applaud Silver’s determination to change it, he may be trying to push water uphill. Pundits exist not just because the media keeps pushing them in front of us – they exist because we keep listening. Humans like opinions and anecdotes. We’re not hardwired to process data and objectively rationalize. We connect with stories and we’re drawn to decisive opinion leaders. Silver will have to find some middle ground here, and that seems to be where the problems arise. The minute writers add commentary to data, they have to impose an ideological viewpoint. It’s impossible not to. And when you do that, you introduce a degree of abstraction.

The backlash against Fivethirtyeight.com generally falls into two camps: Foxes like Silver, who have no problem with the approach but disagree with the specific data put forward, and Hedgehogs, who just don’t like the entire concept. The first camp may come onside as Silver and his team work out the inevitable hiccups in their approach. The second, which, it should be noted, has a large number of pundits in its midst, will never become fans of Silver and his foxlike approach.

In the end though, it really doesn’t matter what columnists and journalists think. It’s up to the consumers of news media. We’ll decide what we like better – hedgehogs or foxes.

The Psychology of Usefulness: A New Model for Technology Acceptance

In the last post, I reviewed the various versions of the Technology Acceptance Model. Today, I’d like to share my own thoughts on the subject and propose a new model. But first, I’d like to bring an additional framework into the discussion.

Introduction of Sense Making

I like Gary Klein’s Theory of Sense Making – a lot! And in the area of technology acceptance, I think it has to be part of the discussion. It introduces a natural Bayesian rhythm to the process that I think provides an intuitive foundation for our decisions on whether or not we’ll accept a new technology.

Gary Klein’s Data-Frame Model of Sensemaking, from Klein et al., “How Might ‘Transformational’ Technologies and Concepts Be Barriers to Sensemaking in Intelligence Analysis?”

Essentially, the Sense Making Model says that when we try to make sense of something new, we begin with some type of perspective, belief or viewpoint. In Bayesian terms, this would be our prior. In Klein’s model, he called it a frame.

Now, this frame doesn’t only give us a context in which to absorb new data, it actually helps define what counts as data. This is a critical concept to remember, because it dramatically impacts everything that follows. Imagine, for example, that you arrive on the scene of a car accident. If your frame was that of a non-involved bystander, the data you might seek in making sense of the situation would be significantly different than if your frame was that of a person who recognized one of the vehicles involved as belonging to your next-door neighbor.

In the case of technology acceptance, this initial frame will shape what types of data we would seek in order to further qualify our decision. If we start with a primarily negative attitude, we would probably seek data that would confirm our negative bias. The opposite would be true if we were enthusiastic about the adoption of technology. For this reason, I believe the creation of this frame should be a step in any proposed acceptance model.

But Sense Making also introduces the concept of iterative reasoning. After we create our frame, we do a kind of heuristic “gap analysis” on our frame. We prod and poke to see where the weaknesses are. What are the gaps in our current knowledge? Are there inconsistencies in the frame? What is our level of conviction on our current views and attitudes? The weaker the frame, the greater our need to seek new data to strengthen it. This process happens without a lot of conscious consideration. For most of us, this testing of the frame is probably a subconscious evaluation that then creates an emotional valence that will impact future behavior. On one extreme, it could be a strongly held conviction, on the other it would be a high degree of uncertainty.

If we decide we need more data, the Sense Making Model introduces another “Go/No Go” decision point. If the new data confirms our initial frame, we elaborate that frame, making it more complete. We fill in gaps, strengthen beliefs, discard non-aligned data and update our frame. If our sense making is in support of a potential action and we seem to be heading in the right direction with our data foraging, this can be an iterative process that continually updates our frame until it’s strong enough to push us over the threshold of executing that action.

But, if the new data causes serious doubt about our initial frame, we may need to consider “reframing,” in which case we’d have to seek out new frames, compare them against our existing one, and potentially discard it in favor of one of the new alternatives. This essentially returns us to square one, where we need to find data to elaborate the new frame. And there the cycle starts again.

This double loop learning process illustrates that a decision process, such as accepting a new technology, can loop back on itself at any point, and may do so at several points. More than this, it is always susceptible to a “reframing” incident, where new data may cause the existing frame to be totally discarded, effectively derailing the acceptance process.
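The elaborate/reframe loop described above maps naturally onto Bayesian updating. Here’s a purely illustrative sketch – the thresholds, the likelihood pairs, and the function names are my own inventions, not part of Klein’s model – treating confidence in the current frame as a probability that each new datum either pushes toward an action threshold or collapses toward a reframing threshold:

```python
def bayes_update(prior, p_if_true, p_if_false):
    """Posterior probability that the current frame is correct,
    given one new datum and its likelihood under each hypothesis."""
    num = prior * p_if_true
    return num / (num + (1 - prior) * p_if_false)

def make_sense(data, prior=0.5, act_threshold=0.9, reframe_threshold=0.1):
    """Iterate the elaborate/reframe loop over a stream of evidence.

    Each datum is a (p_if_true, p_if_false) likelihood pair.
    Returns ('act', confidence) once the frame is strong enough to act on,
    ('reframe', confidence) if confidence collapses and the frame should
    be discarded, or ('undecided', confidence) if the data run out first.
    """
    confidence = prior
    for p_true, p_false in data:
        confidence = bayes_update(confidence, p_true, p_false)
        if confidence >= act_threshold:
            return ('act', confidence)      # frame elaborated past the action threshold
        if confidence <= reframe_threshold:
            return ('reframe', confidence)  # frame discarded: back to square one
    return ('undecided', confidence)
```

A steady diet of confirming evidence crosses the action threshold quickly; disconfirming evidence drives the same loop to a reframe, which is exactly the double-loop behavior in Klein’s diagram.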

Revisiting Goal Striving

I also like Bagozzi’s Goal Striving model, for reasons outlined in a previous post. I won’t rehash them here, except to say that this model introduces a broader context that is more aligned with the complexity of our typical decision process. Our desire to achieve goals is a fundamental part of the creation of the original frame, which forms the starting point for our technology acceptance decision. Here, the Goal Desire step, at the left side of the model, could effectively be the frame that then gets updated as we move from Goal Intention to Behavioral Desire, and then once again as we move to Behavioral Intention. All the inputs shown in Bagozzi’s model, both external factors (i.e., Group Norms) and internal factors (Emotions, etc.), would serve as data in either the updating or reframing loops in Klein’s model.

Bagozzi’s purchasing behavior adoption model

A New Model

As the final step in this rather long process I’ve been dragging you through for the last several posts, I put forward a new proposed model for technology acceptance.


I’ve attempted to include elements of Sense Making, Goal Striving and some of the more valuable elements from the original Technology Acceptance Models. I’ve also tried to show that this is an iterative journey – a series of data gathering and consideration steps, each of which can result in either a decision to move forward (elaborate the frame) or move backward to a previous step (reframe). The entire model is shown below, but we’ll break it down into pieces to explore each step a little more deeply.

 

Setting the Frame


The first step is to set the original frame, which is the Goal Intention. In this case, a goal is either presented to us, or we set the goal ourselves. The setting of this goal is the trigger to establish both a cognitive and emotional frame that sets the context for everything that follows. Factors that go into the creation of the Goal Intention can include both positive and negative emotions, our attitudes towards the success of the goal, how it will impact our current situation (affect toward the means), and what we expect as far as outcomes. These factors will determine how robust our Goal Intention is, which will factor heavily in any subsequent decisions made as part of this Goal Intention, including the decision to accept or reject any relevant technologies required to execute on it.

We can assume, because there is not an updating step shown here, that once the Goal Intention is formed, the person will move forward to the next step – the retrieval of internal information and the creation of our attitude towards the Goal to be achieved.

The Internal Update


With the setting of the goal intention, we have our frame. Now, it’s up to us to update that frame. Again, our confidence in this initial frame will determine how much data we feel we need to collect to update our frame. This follows Herbert Simon’s heuristic rules of thumb for Bounded Rationality. If we’re highly confident in our frame (to the point where it’s entrenched as a belief) we’ll seek little or no data, and if we do, the data we seek will tend to be confirmatory. If we’re less confident in our frame, we’ll actively go and forage for more data, and we’ll probably be more objective in our judgement of that data. Again, remember, Klein’s Sense Making model says that our frame determines what we define as data.

The first update will be a heuristic and largely subconscious one. We’ll retrieve any relevant information from our own memory. This information, which may be positive or negative in nature, will be assembled into an “attitude” towards the technology. This is our first real conscious evaluation of the technology in question. This would be akin to a Bayesian “prior” – a starting point for subsequent evaluation. It also represents an updating of the original frame. We’ve moved from Goal Intention to an emotional judgement of the technology to be evaluated.

The creation of the “Attitude” also requires us to begin the risk/reward balancing, similar to Charnov’s Marginal Value Theorem used in optimal foraging. Negative items we retrieve increase risk; positive ones increase reward. The balance between the two determines our next action. From this point forward, each updating of the frame leads us to a new decision point. At this decision point, we have to decide whether to move forward (elaborate our frame) or return to an earlier point in the decision process, with the possibility that we may need to reframe at that point. Each of these represents a “friction point” in the decision process, with reward driving the process forward and risk introducing new friction. At the attitude stage, excessive risk may cause us to go all the way back to reconsidering the goal intention. Does the goal as we understand it still seem like the best path forward, given the degree of risk we have now assigned to its execution?

Let’s assume we’ve decided to move forward. Now we have to take that Attitude and translate it into Desire. Desire brings social aspects into the decision. Will the adoption of the technology elevate our social status? Will it cause us to undertake actions that may not fit into the social norms of the organization, or square well with our own social ethics? These factors will have a moderating effect on our desire. Even if we agree that the technology in question may meet the goal, our desire may flag because of the social costs that go along with the adoption decision. Again, this represents a friction point, where our desire may be enough to carry us forward, or where it may not be strong enough, causing us to re-evaluate our attitude towards the technology. If we bump back to the “Attitude” stage, a sufficiently negative judgement may in turn bump us even further back to goal intention.

The External Update


With the next stage, we’ve moved from Desire to Intention. Up to now the process has been primarily internal and also primarily either emotional or heuristic. There has been little to no rational deliberation about whether or not to accept the technology in question. The frame that has been created to this point is an emotional and attitudinal frame.

But now, assuming that this frame is open to updating with more information, the process becomes more open to external variables and also to the input of data gathered for the express purpose of rational consideration. We start openly canvassing the opinions of others (subjective norm) and evaluating the technology based on predetermined factors. In the language of marketing, this is the consumer’s “consideration” stage. We know the next step is Action – where our intention becomes translated into behavior. In the previous TAM models, this step was a foregone conclusion. Here, however, we see that it’s actually another decision friction point. If the data we gather doesn’t support our intention, action will not result. We will loop back to Goal Intention and start looking for alternatives. At the very least, this one stage may loop back on itself, resulting in iterative cycles of setting new data criteria, gathering this data and pushing towards either a “go” or “no go” decision. Only when there is sufficient forward momentum will we move to action.

Here, at the Action stage, our evaluation will rely on experiential feedback. At this point, we resurrect the concepts of “Ease of Use” and “Perceived Usefulness” from previous versions of TAM. In this case, the Intention stage would have constructed an assumed “prior” for each of these – a heuristic assessment of how easy it will be to use the technology and also the usefulness of it. This then gets compared to our actual use of the technology. If the bar of our expectations is not met, the degree of friction increases, holding us back from repeating the action, which is required to entrench it as a behavior. This will be a Charnovian balancing act. If the usefulness is sufficient, we will put up with a shortfall in the perceived ease of use. On the flip side, no matter how easy the tool is to use, if it doesn’t deliver on our expectation of usefulness, it will get rejected. Too much friction at this point will result in a loop back to the Intention stage (where we may reassess our evaluation of the technology to see if the fault lies with us or with the tool) and will possibly cause a reversion all the way to our Goal Intention.

If our experience meets our expectation, repetition will begin to create an organizational behavior. At this stage, we move from trial usage to embedding the technology into our processes. At this point, organizational feedback becomes the key evaluative criterion. Even if we love the technology, sufficient negative feedback from the organization will cause us to re-evaluate our intention. Finally, if the technology being evaluated successfully navigates past this chain of decision points without becoming derailed, it becomes entrenched. We then evaluate whether it successfully plays its part in the attainment of our goals. This brings us full circle, back to the beginning of the process.
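The chain of decision points just described can be sketched as a simple forward/backward walk. This is only a toy illustration of the friction-point mechanic – the stage names come from the model, but the (reward, risk) numbers and the “step back one stage” rule are my own simplifying assumptions, not a validated behavioral model:

```python
STAGES = ['goal_intention', 'attitude', 'desire', 'intention', 'action', 'behavior']

def traverse(evaluations, max_steps=20):
    """Walk the decision path. `evaluations[stage]` is a (reward, risk) pair:
    enough reward carries us forward past the friction point; excess risk
    loops us back one stage. Returns 'accepted' if the technology becomes
    entrenched, otherwise the stage where we stalled when steps ran out."""
    i = 0
    for _ in range(max_steps):
        reward, risk = evaluations[STAGES[i]]
        if reward >= risk:
            i += 1                    # elaborate the frame: move forward
            if i == len(STAGES):
                return 'accepted'
        else:
            i = max(0, i - 1)         # too much friction: loop back a stage
    return STAGES[i]
```

With every stage’s reward outweighing its risk, the walk runs straight through to acceptance; a single high-risk stage (say, the social costs at the desire stage) leaves the decision oscillating between stages and the technology unadopted, which is the derailing behavior described above.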

Summing Up

The original goal of the Technology Acceptance Model was to provide a testable model to predict adoption. My goal is somewhat different: showing technology adoption as a series of Sense Making and Goal Attainment decisions, each offering the opportunity to move forward to the next stage or loop back to a previous stage. In extreme cases, it may result in outright rejection of the technology. As far as testing for predictability, this is not the parsimonious model envisioned by Venkatesh, but then again, I suspect parsimony was sacrificed by Venkatesh and his contributing authors somewhere among the multiple revisions that were offered.

This is a model of Bayesian decision making, and I believe it could be applied to many considered decision scenarios. One could map most higher end consumer purchases on the same decision path. The value of the model is in understanding each stage of the decision path and the factors that both introduce risk related friction and reward related momentum. Ideally, it would be fascinating to start to identify representative risk/reward thresholds at each point, so factors can be rebalanced to achieve a successful outcome.

As we talk about the friction in these decision points, it’s also important to remember that we all have different set points for how we balance risk and reward. When it comes to technology acceptance, our set point will determine where we fall on Everett Rogers’ diffusion of innovations curve.

 

Those with a high tolerance for risk and an enhanced ability to envision reward will fall to the far left of the curve, either as Innovators or Early Adopters. Rogers noted in Diffusion of Innovations:

Innovators may…possess a type of mental ability that better enables them to cope with uncertainty and to deal with abstractions. An innovator must be able to conceptualize relatively abstract information about innovations and apply this new information to his or her own situation

Those with a low tolerance for risk and an inability to envision rewards will be to the far right, falling into the Laggard category. The rest of us, representing 68% of the general population, will fall somewhere in between. So, in trying to predict the acceptance of any particular technology, it will be important to assess the innovativeness of the individual making the decision.
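That 68% figure is the two “majority” categories combined: Rogers defines his adopter categories by cutting the roughly normal distribution of adoption times at the mean and at one and two standard deviations before it. A quick check with the standard normal CDF (note that Rogers rounds the tail shares to 2.5% and 13.5%; the exact cutoffs give 2.3% and 13.6%):

```python
from statistics import NormalDist  # Python 3.8+

def adopter_shares():
    """Category shares implied by Rogers' standard-deviation cutoffs on a
    normal distribution of adoption times."""
    z = NormalDist().cdf
    return {
        'innovators':     z(-2.0),            # earliest ~2.5%
        'early_adopters': z(-1.0) - z(-2.0),  # next ~13.5%
        'early_majority': z(0.0) - z(-1.0),   # ~34%
        'late_majority':  z(1.0) - z(0.0),    # ~34%
        'laggards':       1.0 - z(1.0),       # final ~16%
    }
```

The two majority categories sum to about 68% – the share of the population within one standard deviation of the mean adoption time.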

This hypothetical model represents a culmination of the behaviors I’ve observed in many B2B adoption decisions. I’ve always stressed the importance of understanding the risk/reward balance of your target customers. I’ve also mapped out how this can vary from role to role in organizational acceptance decisions.

This post, which is currently pushing 3000 words, is lengthy enough for today. In the next post, I’ll revisit what this new model might mean for our evaluation of usefulness and subsequent user loyalty.

The Bug in Google’s Flu Trend Data

First published March 20, 2014 in Mediapost’s Search Insider

Last year, Google Flu Trends blew it. Even Google admitted it. It overpredicted the occurrence of flu by a factor of almost 2:1, which is a good thing for the health care system, because if Google’s predictions had been right, we would have had the worst flu season in 10 years.

Here’s how Google Flu Trends works. It monitors a set of approximately 50 million flu-related terms for query volume. It then compares this against data collected from health care providers where influenza-like illnesses (ILI) are mentioned during a doctor’s visit. Since the tracking service was first introduced, there has been a remarkably close correlation between the two, with Google’s predictions typically coming within 1 to 2 percent of the number of doctor’s visits where the flu bug is actually mentioned. The advantage of Google Flu Trends is that it is available about two weeks prior to the ILI data, giving a much-needed head start for responsiveness during the height of flu season.
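To see the mechanics in miniature – all the numbers below are invented, and Google’s real model is a far more sophisticated regression over millions of query series, not a one-variable line fit – a nowcast of this kind boils down to fitting query volume against the lagging official figures and then reading new query volume through that fit:

```python
# Toy weekly data: an index of flu-related query volume, and the %ILI figure
# that arrives from health care providers about two weeks later.
queries = [1.0, 1.4, 2.1, 3.0, 4.2, 5.1]
ili_pct = [0.9, 1.2, 1.8, 2.6, 3.5, 4.3]

def fit_line(xs, ys):
    """Ordinary least squares fit for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

a, b = fit_line(queries, ili_pct)

def nowcast(query_volume):
    """Estimate %ILI from this week's query volume, weeks ahead of the
    official figure."""
    return a + b * query_volume
```

The weakness is built right into the last line: a media-driven spike in searching pushes `nowcast()` up even if nobody is actually getting sick, because the model only ever sees the queries.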

But last year, Google’s estimates overshot actual ILI data by a substantial margin, effectively doubling the size of the predicted flu season.

Correlation is not Causation

This highlights a typical trap with big data – we tend to start following the numbers without remembering what is generating them. Google measures what’s on people’s minds. ILI data measures what people are actually going to the doctor about. The two are highly correlated, but one doesn’t necessarily cause the other. In 2013, for instance, Google speculated that increased media coverage might be the cause of the overinflated predictions. More news coverage would have spiked interest, but not actual occurrences of the flu.

Allowing for the Human Variable

In the case of Google Flu Trends, because it’s using a human behavior as a signal – in this case online searching for information – it’s particularly susceptible to network effects and information cascades. The problem with this is that these social signals are difficult to rope into an algorithm. Once they reach a tipping point, they can break out on their own with no sign of a rational foundation. Because Google tracks the human generated network effect data and not the underlying foundational data, it is vulnerable to these weird variables in human behavior.

Predicting the Unexpected

A recent article in Scientific American pointed out another issue with an over-reliance on data models – Google Flu Trends completely missed the non-seasonal H1N1 pandemic in 2009. Why? Algorithmically, Google wasn’t expecting it. In trying to eliminate noise from the model, they actually eliminated a signal arriving at an unexpected time. Models don’t do very well at predicting the unexpected.

Big Data Hubris

The author of the Scientific American piece, associate editor Larry Greenemeier, nailed another common symptom of our emerging crush on data analytics – big data hubris. We somehow think the quantitative black box will eliminate the need for more mundane data collection – say, actually tracking doctor’s visits for the flu. As I mentioned before, the biggest problem with this is that the more we rely on data, which often takes the form of arm’s-length correlated data, the further we get from exploring causality. We start focusing on “what” and forget to ask “why.”

We should absolutely use all the data we have available. The fact is, Google Flu Trends is a very valuable tool for health care management. It provides a lot of answers to very pertinent questions. We just have to remember that it’s not the only answer.

Can Facebook Maintain High Ground?

 First published March 13, 2014 in Mediapost’s Search Insider

As I said in my last column, Facebook’s recent acquisition spree seems to indicate that they’re trying to evolve from being our Social Landmark to being a virtual map that guides us through our social activity. But as Facebook rolls out new features or acquires one-time competitors in order to complete this map of the social landscape, will we use it? Snapchat CEO Evan Spiegel apparently doesn’t think so. That’s part of the reason he turned down $3 billion from Facebook.

At the end of 2012, Mark Zuckerberg paid Spiegel and his team a visit. The purpose of the visit was to scare the bejeezus out of Snapchat by threatening to crush them with the roll out of Poke.  Of course, we now know that Poke was a monumental flop while Snapchat rolled along quite nicely, thank you.  Several months later, Zuck flew out to meet with the Snapchat team again, taking a decidedly different tone this time. He also brought along a very big checkbook.  Snapchat said thanks, but no thanks.

So, how can a brash startup like Snapchat beat the 800-lb. gorilla in its own backyard? Why was Poke DOA? Was it a one-of-a-kind miscue on the part of Facebook – or part of a trend?

Part of the answer may lie in how we feel about novelty vs. familiarity in the things we deal with. As I said in the last column, we go through three stages when we explore new landscapes. We move from navigating by landmarks to memorizing routes and, finally, we create our own mental maps of the space, allowing us to plot our own routes as needed. If we apply this to navigating a virtual space like the online social sphere, we should move from relying on landmarks (like Facebook) to using routes (single-purpose apps like Snapchat) and, finally, to creating our own map that allows us to switch back and forth between apps as required. Facebook wants to jump from the first stage to the last in order to remain dominant in the social market, maintaining our map for us by becoming a hub for all required social functionality. But if the Poke story is any indication, we may not be willing to go along for the ride.

But there’s a subtle psychological point to how we learn to navigate new landscapes – we gain mastery over our environment. With this increased confidence comes a reluctance to feel we’re moving backward. We tend to discard the familiar and embrace novelty as we gain confidence. This squares with research done in the familiarity and novelty seeking in humans. We look for familiarity in things that have high degrees of risk, in the faces of others around us or when we’re operating on autopilot. But when we’re actively considering and judging options and looking for new opportunities, we are drawn to new things.

Humans are natural foragers. We have built-in rules of conduct when we go out seeking things that will improve our lot, whether it be food, shelter or tools. Ideally, we look for things that will offer us a distinct advantage over the status quo with a reasonable investment of effort. We balance the two – advantage against effort. If the new options come from an overly familiar place, we tend to mentally discount the potential advantage because we no longer feel we're moving forward. Over time, this builds into a general feeling of malaise towards the overly familiar.
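That advantage-versus-effort balancing, with its discount for the overly familiar, can be roughed out in a few lines of code. This is a toy sketch under my own assumptions – the weights and the linear familiarity discount are illustrative, not drawn from any study:

```python
# Toy model of the advantage-vs-effort trade-off described above.
# The discount rate and the example numbers are illustrative assumptions.

def perceived_value(advantage, effort, familiarity, discount_rate=0.5):
    """Score an option: the raw advantage is mentally discounted as the
    source becomes overly familiar, then weighed against the effort."""
    discounted_advantage = advantage * (1 - discount_rate * familiarity)
    return discounted_advantage - effort

# The same feature feels less valuable coming from an over-familiar source
# (say, a Facebook clone of Snapchat) than from a novel one.
novel = perceived_value(advantage=10, effort=4, familiarity=0.1)
familiar = perceived_value(advantage=10, effort=4, familiarity=0.9)
assert novel > familiar
```

Identical advantage, identical effort – yet the familiar source loses, which is the "malaise" the paragraph describes.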

Time will tell if Evan Spiegel was prescient or just plain stupid in turning down Facebook's offer. The question is not so much whether Facebook will prevail, but whether Snapchat will end up emerging as a key part of the social landscape on a continuing basis. That particular landscape is notoriously unstable, and it's been known to swallow up many, many other companies with nary a burp. Perhaps Spiegel should have taken the money and run.

But then I wouldn’t be betting the farm on Facebook’s chances of permanence either.

The Psychology of Usefulness – Part Five: A Recap

In the past five posts, I've been looking at how we choose to accept new technologies. As part of that, we've had a fairly exhaustive review of the various versions of the Technology Acceptance Models proposed by Fred Davis, Richard Bagozzi and, most prolifically, Viswanath Venkatesh.

Before forging ahead, I’d like to provide a brief recap of primary thoughts behind the models.

In the first post, I explored the difference between autotelic and exotelic activities. The first we do for the sheer enjoyment of the activity itself – our reward is inherent in the doing. Exotelic activities are the things we do because we have to. There is little to no reward in them. Generally, when we're judging usefulness, it's to complete an exotelic activity. In judging usefulness, the emotion most commonly invoked is an aversion to risk – so it carries a negative emotional valence, although a relatively mild one, typically invoking anxiety or concern rather than outright fear or dread. The degree of emotional valence is generally quite low – it's more a calculation of the resources required vs. the usefulness expected.

Next, in the second post, I explained why I believe that our judgement of usefulness is based on a fairly heuristic calculation in the brain, relying on much the same mechanisms we use when foraging for food. Because of that belief, I've borrowed heavily from the previous work done by Pirolli and Card on Information Foraging and also Eric Charnov's work on Optimal Foraging and his Marginal Value Theorem.
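To give a flavor of Charnov's Marginal Value Theorem: a forager should leave a patch (or an information source) when the instantaneous rate of gain drops to the average rate achievable across the whole environment. The gain curve and parameters below are illustrative assumptions, just to show the shape of the prediction:

```python
import math

# A minimal sketch of the Marginal Value Theorem. The diminishing-returns
# gain curve and the travel times are made-up illustrative values.

def gain(t, G=100.0, r=0.5):
    """Cumulative gain from a patch: diminishing returns over time."""
    return G * (1 - math.exp(-r * t))

def optimal_leave_time(travel_time, G=100.0, r=0.5, dt=0.001):
    """Leave when marginal gain g'(t) drops to the average rate g(t)/(t + travel)."""
    t = dt
    while True:
        marginal = G * r * math.exp(-r * t)          # g'(t)
        average = gain(t, G, r) / (t + travel_time)  # overall rate so far
        if marginal <= average:
            return t
        t += dt

# The classic MVT prediction: the farther apart the patches,
# the longer you should stay in the one you're in.
assert optimal_leave_time(travel_time=5.0) > optimal_leave_time(travel_time=1.0)
```

The analogy to usefulness judgements is direct: when switching costs (travel time) are high, we stick with a tool longer even as its returns diminish.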

Because there’s little emotional engagement, we also tend to make useful resources habits if the frequency is high enough.This is the ground I covered in posts Three and Four. First, using Google as an example, I looked at how habits are created and maintained. Then, in the next post, I looked at the factors that might disrupt a habit, forcing us to look for a viable alternative. The more the brain has to be involved in judging usefulness – the less loyal we tend to be.

Also, we will only seek new technologies if: a) our current technology no longer meets our expectations, which are often reset because b) we've become aware of a new, superior technology.

Then, we have to decide whether or not to accept a new technology. There have been several attempts to create a model that can predict the acceptance of a new technology. Most relied on the same foundational assumptions:

  • Some mix of internal and external motivators will result in the creation of an attitude.
  • Depending on the valence of the attitude (either negative or positive) we may form an intention to use the technology.
  • Once this intention is formed, it leads to usage.

All the modifications to the model (5 revisions at last count) focused on the first two of these three assumptions, offering alternatives for the motivators that create the attitude. Some versions removed the attitude step completely and moved directly to intention. But none of them changed the assumed progression from intention to usage.

The useful parts of these models that I wanted to carry forward are:

  • Intentions are formed by a heuristic balancing of negative and positive factors in the adopter’s mind, often labeled Perceived Usefulness and Perceived Ease of Use.
  • External factors, such as the opinions of others, impact our decisions to adopt a technology.
  • The cognitive process involved roughly corresponds to a Bayesian analysis, where we set a “prior” – our original attitude, and update it based on new information gathered through the decision process.
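That Bayesian framing can be made concrete with a toy example. Here the adopter's initial attitude is a Beta-style prior on the technology being useful, nudged by each piece of favorable or unfavorable information gathered during the decision process; the prior strengths and evidence are illustrative assumptions:

```python
# A sketch of the Bayesian cognitive process described above: the original
# attitude is a "prior," updated by new information. The numbers are
# illustrative, not from any of the TAM studies.

def update_attitude(prior_positive, prior_negative, evidence):
    """Beta-binomial style update: each observation shifts the attitude.
    evidence is a list of True (favorable) / False (unfavorable)."""
    pos, neg = prior_positive, prior_negative
    for favorable in evidence:
        if favorable:
            pos += 1
        else:
            neg += 1
    return pos / (pos + neg)  # expected probability the tech is useful

# A mildly positive prior (3:2), then mostly favorable hands-on experience.
attitude = update_attitude(3, 2, [True, True, False, True])
assert attitude == 6 / 9
```

The strength of the prior matters: a long-held attitude (say 30:20 instead of 3:2) barely moves after four observations, which is one reason entrenched habits resist disruption.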

The potentially flawed assumptions I would like to leave behind are:

  • The process is typically a linear one, moving from the left of the model (attitude) to the right of the model (usage).
  • There are no mediating factors between the intention and usage boxes in any of the models.

In 2007, one of the original authors of the first TAM, Bagozzi, said it was time for a paradigm shift in the thinking about technology acceptance. He brought in an entirely new context in which to think about the acceptance of technology – the striving for and achievement of goals. This created a more holistic view of the decision process, where the acceptance of technology wasn't artificially isolated, but was part of a much broader frame where that acceptance was contingent on a hierarchy of goals and sub-goals. What I particularly liked was the addition of "desire" as a step, and also the introduction of self-regulation as a mediating factor. Bagozzi was the first to indicate that the process was possibly more recursive, an iterative cycle rather than a linear path.

Bagozzi’s inclusion of goal setting and achievement builds a context for adoption. This aligns with Everett Roger’s extensive work in innovation adoption, in which he said,

An important factor regarding the adoption rate of an innovation is its compatibility with the values, beliefs, and past experiences of individuals in the social system.

While the acceptance of a technology may be a personal decision, it is almost always set within a broader social context. All the versions of Technology Acceptance Models I looked at included some type of social mediation in the acceptance process. But it was more a factor in the creation of an initial attitude and a mediating factor in the progression from attitude to intention to behavior. In other words, if the acceptance of a technology made you socially unpopular, you would probably change your mind and reject it.

But when we choose to achieve a goal, there is a cognitive process that happens which creates a framework for acceptance. The goal becomes the primary evaluative topic and the technology generally becomes secondary to it. Bagozzi recognized that the two are interlinked and have to be evaluated together. We choose a goal, divide this into sub-goals and then seek how to execute against these goals.

Let's use a personal example to see how goals and technology acceptance are intrinsically linked. Let's say our goal is to get healthier. This breaks down into several sub-goals: beginning a regular exercise routine, eating better, losing weight, drinking less, etc. Each of these then can be further divided into more specific goals. Let's take eating better. It could involve tracking our calories, paying more attention to nutrition labels, including more fresh fruits and vegetables in our diet, cutting out sugar and avoiding processed foods. At this point, we may decide to use a tool like Livestrong's MyPlate Calorie tracker.

If you were to use one of the various versions of TAM to predict the acceptance of this new technology, you would artificially divorce the act of acceptance from the broader goal hierarchy that precedes it. According to TAM, your acceptance of MyPlate would be determined by your evaluation of the ease of use and the expected usefulness. While undoubtedly important, these two factors are completely dependent on the mental scaffolding you've built around the idea of getting healthier. There are a myriad of factors that live beyond these that would have some impact on your eventual acceptance or rejection of the technology in question. For example, perhaps you decide that calorie counting is not the best path to eating better and so any tool that counts calories gets rejected out of hand. Or, perhaps you fall off the wagon with your eating plan and reject the tool not because it's not useful, or easy to use, but simply because counting calories constantly reminds you how weak your will power is.
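The goal scaffolding in that example can be sketched as a small tree. The structure and names here are my own illustration; the point is that pruning a sub-goal like "track calories" rejects MyPlate without the tool ever being scored on Perceived Usefulness or Ease of Use:

```python
# Illustrative goal hierarchy from the example above. TAM evaluates the
# tool in isolation; Bagozzi's framing keeps the whole scaffold attached,
# so rejecting a sub-goal prunes every tool that hangs under it.

goal_hierarchy = {
    "get healthier": {
        "eat better": {
            "track calories": ["MyPlate"],   # the tool lives under a sub-goal
            "cut out sugar": [],
        },
        "exercise regularly": {},
    }
}

def tools_in_play(node):
    """Collect every tool still reachable in the goal tree."""
    if isinstance(node, list):
        return list(node)
    tools = []
    for child in node.values():
        tools += tools_in_play(child)
    return tools

assert "MyPlate" in tools_in_play(goal_hierarchy)

# Decide calorie counting isn't the right path: the sub-goal goes,
# and MyPlate is rejected without ever being evaluated on PU or PEU.
del goal_hierarchy["get healthier"]["eat better"]["track calories"]
assert "MyPlate" not in tools_in_play(goal_hierarchy)
```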

So, with the past posts recapped, next post we’ll forge forward with a proposed new Technology Acceptance Model.

Finding Our Way in the Social Landscape

First published March 6, 2014 in Mediapost’s Search Insider

Last month, Om Malik (of GigaOM fame) wrote an article in Fast Company about the user backlash against Facebook. To be fair, it seems that what's happening to Facebook is not so much a backlash as apathy. You have to care to lash back. This is more of a wholesale abandonment, as millions of users are going elsewhere – using single-purpose apps to get their social media fix. According to the article,

“we cycle between periods in which we want all of our Internet activity consolidated and other times in which we want a bunch of elegant monotaskers. Clearly we have reentered a simplification phase.”

There's a reason why Facebook has been desperately trying to acquire Snapchat for a reported $3 billion. There's also a reason why they picked up Instagram for a billion last year. It's because these simple little apps are leaving the homegrown Facebook alternatives in the dust. Snapchat is killing Facebook's Poke – as Mashable pointed out in this comparison. Snapchat has consistently stayed near the top of App Annie's most popular download chart for the past 18 months. This coincides exactly with Facebook's release of Poke.


Download rates of Facebook Poke


Download rates of Snapchat

Malik indicates it’s because we want a simpler, streamlined experience. A recent article in Business Insider goes one step further – Facebook is just not cool anymore. The mere name induces extended eye rolling in teenagers. It’s like parking the family mini-van in the high school parking lot.  “I hate Facebook. It’s just so boring,” said one of the teens interviewed. Hate! That’s a pretty strong word. What did the Zuck ever do to garner such contempt? Maybe it’s because he’s turning 30 in a few months. Maybe it’s because he’s an old married man.

Or maybe it's just that we have a better alternative. Malik has a good point. He indicates that we tend to oscillate between consolidation and specialization. I take a slightly different view. What's happening in social media is that we're getting to know the landscape better. We're finding our way. This isn't so much about changing tastes as it is about increased familiarity and a resetting of expectations.

If you look at how humans navigate new environments, you'll notice some striking similarities. When we encounter a new landscape, we go through three phases of wayfinding. We begin by relying on landmarks. These are the "highest ground" in a new, unfamiliar landscape and we navigate relative to them. They become our reference points and we don't stray far from them. Facebook is, you guessed it, a landmark.

The next phase is called "Route Knowledge." Here, we memorize the routes we use to get from landmark to landmark. We come to recognize the paths we take all the time. In the world of online landscapes, you could substitute the word "app" for "route." Instagram, Snapchat, Vine and the rest are routes we use to get where we need to go quickly and easily. They're our virtual "shortcuts."

The last stage of wayfinding is "Survey Knowledge." Here, we are familiar enough with a landscape that we've acquired a mental "map" of it and can mentally calculate alternative routes to get to our destination. This is how you navigate in your hometown.

What's happening to Facebook is not so much that our tastes are swinging. It's just that we're confident enough in our routes/apps that we're no longer solely reliant on landmarks. We know what we want to do and we know the right tool to use. The next stage of wayfinding, Survey Knowledge, will require some help, however. I've talked in the past about the eventual emergence of meta-apps. These will sit between us and the dynamic universe of tools available. They may be largely or even completely transparent to us. What they will do is learn about us and our requirements while maintaining an inventory of all the apps at our disposal. Then, as our needs arise, they will serve up the right app for the job. These meta-apps will maintain our survey knowledge for us, keeping a virtual map of the online landscape to allow us to navigate at will.

As Facebook tries to gobble up the Instagrams and Snapchats of the world, they’re trying to become both a landmark and a meta-app. Will they succeed? I have my thoughts, but those will have to wait until a future column.

The Psychology of Usefulness – The Acceptance of Technology – Part Four

After Venkatesh and Davis released the TAM 2 model, Venkatesh further expanded the variables that went into our calculation of Perceived Usefulness in TAM 3:

TAM3

Venkatesh, V. and Bala, H. “TAM 3: Advancing the Technology Acceptance Model with a Focus on Interventions,”

Venkatesh divided the determinants of Perceived Ease of Use into two categories: Anchor determinants and Adjustment determinants. Anchor determinants were the user's baseline and came from their general beliefs about computers and their usage. In Bayesian terms, this would be the user's "prior." It would create the foundational attitude towards the technology in question.

Anchor determinants included:

Computer self-efficacy – How proficient the user is with the current technology paradigm (i.e., how comfortable they are with computers).

Perception of External Control – How much organizational support is there for the system to be accepted?

Computer Anxiety – Is there apprehension or fear involved with using a computer?

Computer Playfulness – Spontaneity in computer interactions.

Then, Venkatesh added the Adjustment determinants. These factors come from direct experience with the technology in question and are used to “adjust” the user’s attitude towards the technology. Again, this is a very Bayesian cognitive process.

Adjusting Determinants included:

Perceived Enjoyment – Is using the system enjoyable?

Objective Usability – Is the effort actually required what it was perceived to be (resulting in either positive or negative reinforcement)?

Over time, some of the Anchor factors (Playfulness, Anxiety) will diminish in importance and the Adjustments will become stronger.

So now, in TAM 3, we have essentially the same process of acceptance, but with much more granularity in the definition of the determinants that go into Perceived Ease of Use. However, with the division of determinants into the categories of "Anchor" and "Adjustment," Venkatesh starts to hint at the iterative nature of this process of acceptance. We create a baseline belief or attitude and then this becomes updated, either through external forces (the Subjective Norm) or our own internal experiences (the Adjusting Determinants). While the model indicates a linear decision path, it now appears likely that the path is a more recursive one.

As an interesting aside, Brown, Massey, Montoya-Weiss and Burkman (2002) found variance in the importance of Perceived Ease of Use. Remember, in the original TAM model, Davis indicated that while Perceived Ease of Use does have an impact on the original attitude towards a technology, he found that Perceived Usefulness is a more powerful indicator of usage intention. Brown et al. found that this varies depending on whether the acceptance of a technology is mandatory or voluntary. If acceptance is mandatory, they found that Perceived Ease of Use may actually have a more important impact on system acceptance.

After TAM 3, there was a further attempt to round up the competing theories that all contributed to the evolution of TAM. This was the hopefully named Unified Theory of Acceptance and Use of Technology (or UTAUT), proposed in 2003 by Venkatesh, Morris, Davis and Davis:

UTAUT

Venkatesh, V., Morris, M.G., Davis, F.D., and Davis, G.B. “User Acceptance of Information Technology: Toward a Unified View,” MIS Quarterly, 27, 2003, 425-478

So, what began as an attempt to simplify our understanding of the acceptance of technology became a rather unwieldy beast. Bagozzi, who worked with Davis on the original TAM model, finally had to step back in and comment:

The exposition of UTAUT is a well-meaning and thoughtful presentation. But in the end we are left with a model with 41 independent variables for predicting intentions and at least eight independent variables for predicting behavior. Even here, arguments can be made that important independent variables have been left out, because few of the included predictors are fundamental, generic or universal and future research is likely to uncover new predictors not subsumable under the existing predictors. The IS field risks being overwhelmed, confused and misled by the growing piecemeal evidence behind decision making and action in regard to technology adoption/acceptance/rejection.

But the biggest criticism of Venkatesh's models came from Bagozzi (2007), who pointed out the same fundamental flaw that I mentioned – the assumption that intention leads to usage. Here are Bagozzi's main concerns with the evolution of TAM:

Is Behavior the End Goal? – Bagozzi says:

The models…fail to consider that many actions are taken not so much as ends in and of themselves but rather as means to more fundamental ends or goals.

TAM ends at behavior. Actually, it ends at intent, as it assumes that intent always leads to behavior, but we'll come back to that in a moment. Bagozzi's point is that behavior is dependent on a broader context, and there could be an end goal that will impact acceptance that is completely ignored in the model. For example, let's say that the specific technology to be accepted is a tool to analyze conversion data in online campaigns. The end goal is to improve ROI from all online campaigns. But, to reach this end goal, there are a number of contributory goals, including, but not limited to:

  • More efficient budget allocations
  • Improved landing page performance
  • Better tracking of performance data
  • Improved click-throughs on online ads
  • An improved online conversion path

The tool in question is a subset of the third item in the list. Because the decision to accept this tool is contingent on meeting a number of broader goals, that goal context obviously becomes a critical factor in the acceptance. But Bagozzi's point is that the only place this is accounted for, presumably, is in the anticipated beliefs upstream in the model. Once again, TAM falls victim to its own quest for parsimony. Bagozzi argues a better approach is to understand that this is a process, and as such will include goal striving. And that is a recursive process:

In goal striving, intention formation is succeeded by planning (e.g, when, where and how to act instrumentally), overcoming obstacles, resisting temptations, monitoring progress to goal achievement, readjusting actions, maintaining effort and willpower, and reassessing and even changing goals and means. These processes fill the gaps between intention and behavior and between behavior and goal attainment and are crucial for the successful adoption and use of technology.

Finally! An acknowledgement that it’s not a straight line from intention to behavior!

The Gap between Attitude and Intention – First of all, Bagozzi disagreed with the elimination of Attitude as a preliminary step to Intention. Further, he says that even with Attitude back in place, there may be very compelling reasons why a person may agree that the technology is acceptable, but still choose not to accept it. To fill in the gap, he proposes borrowing from the Belief-Desire model. Here, even if the Belief is in place, there also needs to be Desire before the intention to act is formed. So, to recap, a user may have all the right determinants in place to decide that the technology in question is perceived to be useful (PU) and that it will be sufficiently easy to learn to use (PEU) but still not have any desire to accept the technology. Perhaps the decision to accept was made by her boss, who is an asshole, and she's resisting on principle.

In Bagozzi's paper, he goes on at some length to outline the limitations in any of the proposed TAM models. In addition to the above points, he also suggests that things like the group, cultural and social aspects of technology acceptance are not adequately dealt with in the model, as well as emotions, self-regulation and other mediators that are common in our pursuit of goals. In short, he says that technology acceptance is too reliant on context to lend itself to general models, and the decision path itself is much more complex than the models would indicate. He advocates a new foundation, based on his work on goal setting and striving:


Bagozzi 2007

This lays out the core decision-making process – not specific to technology acceptance, but applicable to the striving towards any goal. But, in this model, there is the opportunity to "plug in" factors specific to the acceptance of a technology within a specific context. For example, inputs into Goal Desire could include any number of things: first of all, the goal itself (taking into account the entire goal hierarchy that leads to the focal goal – the acceptance of a specific technology), anticipated and anticipatory emotions, relative advantage, job fit, attitudes toward success and failure, outcome expectancies, and, of course, Perceived Usefulness and Perceived Ease of Use. The exact balance will be contingent on circumstance.

The arrow leading up to Action Desire represents mediating factors such as group norms, subjective norms, social identity, effort expectancy and attitudes towards an act.

A key addition to Bagozzi's proposed model is self-regulation. Somewhere between desire and intention, humans have the ability to reflect on their desires and decide if they are comfortable intending to act on them. Specifically, in the case of technology adoption, we have to decide if the behaviors we would undertake would sit well with our moral and belief framework. Let's say, for instance, that the technology adoption being considered would increase efficiency dramatically, allowing the company to decrease head count. You may be aware of the human cost of the decision to adopt, and this may cause you to weigh your desire (increased efficiency) against it.

Below is an expanded version of Bagozzi’s model with all the inputs shown.


Bagozzi’s proposed model with inputs shown (Bagozzi 2007)

What is interesting in Bagozzi's model is the chain of decision making which separates Goal desire from Behavioral desire. This is a further exploration of the transition from Attitude to Action, which was so deterministic in Venkatesh's models. Bagozzi sets it in what, to me, is a more palatable framework. We have goals, which likely include broad goals and sub-goals in some type of hierarchy. Our desire to reach these goals then has to be translated into intentions, where the actual execution required begins to be planned out. This helps us understand the required behaviors, which then leads to behavioral desires. These desires then get translated into intentions. But throughout this chain, there are a number of mediating factors, both internal and external, that can cause reflection, resetting, outright abandonment or modification of both desires and intentions. There is no single arrow pointing to the right. There is, instead, an iterative process that allows for looping back to any one of the previous stages.
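The looping-back that makes Bagozzi's framing appealing can be caricatured in code. This is only a sketch – the stage names follow the model, but the checks and the trace are my own simplifications:

```python
# A sketch of Bagozzi's iterative loop, in contrast to a single
# left-to-right arrow. The inputs and the trace are illustrative.

def strive(goal_desire, self_regulation_ok, obstacles):
    """Walk desire -> intention -> behavior, looping back on failure.
    obstacles is a list of booleans: True means an attempt is blocked."""
    trace = []
    attempts = list(obstacles)
    if not goal_desire:
        return trace
    trace.append("goal desire")
    if not self_regulation_ok:           # mediating factor: desire vetoed
        trace.append("desire abandoned")
        return trace
    trace.append("intention")
    while attempts:
        blocked = attempts.pop(0)
        if blocked:
            trace.append("replan")       # loop back instead of giving up
        else:
            trace.append("behavior")
            return trace
    trace.append("goal reassessed")      # obstacles exhausted the effort
    return trace

# One blocked attempt, then success: the path loops before it completes.
assert strive(True, True, [True, False]) == [
    "goal desire", "intention", "replan", "behavior"]
```

The point of the sketch is the `while` loop: intention can yield replanning, abandonment or goal reassessment before (or instead of) behavior, which no single rightward arrow can express.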

This post has become much longer and much more technical than I originally intended, so I think we’ll wrap this up and start fresh next time with a recap of the various models of Technology Acceptance, and my attempt to build a model that allows for iterative reflection and adjustment.

The Psychology of Usefulness: The Acceptance of Technology – Part Three

In Part Two of this series, I looked at Davis and Bagozzi’s Technology Acceptance Model, first proposed in 1989.

Technology_Acceptance_Model

As I said, while the model was elegant and parsimonious, it seems to simplify the realities of technology acceptance decisions too much. In 2000, Venkatesh and Davis tried to deal with this in TAM 2 – the second version of the Technology Acceptance Model.

TAM2

In this version, they added several determinants of Perceived Usefulness and demoted Perceived Ease of Use to being just one of the factors that impacted Perceived Usefulness.  Impacting this mental calculation were two mediating factors: Experience and Voluntariness. This rebalancing of factors provides some interesting insights into the mental process we go through when making a decision whether we’ll accept a new technology or not.

Let’s begin with the determinants of Perceived Usefulness in the order they appear in Venkatesh and Davis’s model:

Subjective Norm: TAM 2 resurrects one of the key components of the original Theory of Reasoned Action model – the opinions of others in your social environment.

Image: Venkatesh and Davis also included another social factor in their list of determinants – how would the acceptance of this technology impact your status in your social network? Notice that our calculation of the image enhancement potential has the Subjective Norm as an input. It’s a Bayesian prediction – we start with our perceived social image status (the prior) and adjust it based on new information, in this case the acceptance of a new technology.

Job Relevance: How applicable is the technology to the job you have to do?

Output Quality: How will this technology impact your ability to perform your job well?

Result Demonstrability: How easy is it to show the benefits of accepting the technology?

It’s interesting to note how these factors split: the first two (subjective norm and image) being related to social networks, the next two (Job Relevance and Output Quality) being part of a mental calculation of benefit and the last one, Demonstrability, bridging the two categories: How easy will it be to show others that I made the right decision?

According to the TAM 2 model, we combine these factors – practical task performance considerations and social status aspirations – into a rough calculation of the perceived usefulness of a technology. After this is done, we start balancing that against how easy we perceive the new technology to be to use. Venkatesh and Davis commented on this and felt that Perceived Ease of Use has a variable influence in two areas: the forming of an attitude towards the technology and a behavioral intention to use the technology. The first is pretty straightforward. Our attitude is our mental frame regarding the technology. Again, to use a Bayesian term, it's our prior. If the attitude is positive, it's very probable that we'll form a behavioral intention to use the technology. But there are a few mediating factors at this point, so let's take a closer look at the creation of Behavioral Intention.

In forming our intention, Perceived Ease of Use is just one of the determinants we use in our "usefulness" calculation, according to the model. And it depends on a few things. It depends on efficacy – how comfortable we judge ourselves to be with the technology in question. It also depends on what resources we feel we will have access to in order to help us up the learning curve. But, in the forming of our attitude (and thereby our intention), Venkatesh and Davis felt that Perceived Usefulness will typically be more important than Perceived Ease of Use. If we feel a technology will bring a big enough reward, we will be willing to put up with a significant degree of pain. At least, we will in what we intend to do. It's like making a New Year's resolution to lose weight. At the time we form the intention, the pain involved is sometime in the future, so we go forward with the best of intentions.

As we move forward from Attitude to Intention, this transition is further mediated in the model by our subjective norm – the cognitive context we place the decision in. Into this subjective norm fall our experience (our own evaluation of our efficacy), the attitudes of others towards the technology and also the "Voluntariness" of the acceptance. Obviously, our intention to use will be stronger if it's a non-negotiable corporate mandate, as opposed to a low-priority choice we have the latitude to make.

What is missing from the TAM 2 model is the link between Perceived Ease of Use and actual Usage. Just like a New Year's resolution, intentions don't always become actions. Venkatesh and Davis said Perceived Ease of Use is a moving, iteratively updated calculation. As we gain hands-on experience, we update our original estimate of Ease of Use, either positively or negatively. If the update is positive, it's more likely that Intention will become Usage. If negative, the technology may fail to become accepted. In fact, I would say this feedback loop is an ongoing process that may repeat several times in the space between Intention and Usage. The model, with a single arrow going in one direction from Intention to Usage, belies the complexity of what is happening here.
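One way to picture that repeating feedback loop: intention survives into usage only if the running balance of Perceived Usefulness and the continually revised Perceived Ease of Use stays above some threshold. The weights and threshold below are illustrative assumptions, not values from TAM 2:

```python
# A sketch of the Intention -> Usage feedback loop described above.
# The 0.7/0.3 weights and the 0.5 threshold are made-up illustrative values.

def intention_becomes_usage(perceived_usefulness, initial_peou,
                            experiences, threshold=0.5):
    """Each hands-on experience nudges Perceived Ease of Use up or down;
    adoption survives only if the running intention score never dips
    below the threshold. experiences is a list of deltas to PEOU."""
    peou = initial_peou
    for delta in experiences:            # e.g. +0.1 = a pleasant surprise
        peou = max(0.0, min(1.0, peou + delta))
        intention = 0.7 * perceived_usefulness + 0.3 * peou
        if intention < threshold:
            return False                 # intention abandoned mid-loop
    return True

# A useful tool can absorb a rocky start and still get adopted...
assert intention_becomes_usage(0.8, 0.6, [-0.2, -0.1, 0.3]) is True
# ...while for a marginal tool, one bad experience derails adoption.
assert intention_becomes_usage(0.5, 0.5, [-0.3]) is False
```

Note that the check runs inside the loop – acceptance can fail at any iteration, which is exactly what the single Intention-to-Usage arrow hides.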

Venkatesh and Davis wanted to create a more realistic model, expanding the front end of the model to account for determinants going into the creation of Intention. They also wanted to provide a model of the decision process that better represented how we balance Perceived Usefulness and Perceived Ease of Use. I think they made some significant gains here. But the model is still a linear one – going in one direction only. What they missed is the iterative nature of acceptance decisions, especially in the gap between Intention and Behavior.

In Part Four, we’ll look at TAM 3 and see how Venkatesh further modified his model to bring it closer to the real world.

So, Six Seconds is the Secret, Huh?

First published February 13, 2014 in Mediapost’s Search Insider

Apparently, the new official time limit for customer engagement is 6 seconds, according to a recent post on Real Time Marketing. How did we come up with 6? Well, in the world of social media engagement it seemed like a good number and no one has called bullshit on it yet, so 6 it is.

Marketers love to talk about time – just in time, real time, right time. At the root of all this “time talk” is the realization that customers really don’t have any time for us, so we have to somehow jam our messages into the tiny little cracks that may appear in the wall of willful ignorance they carefully build against marketing. The marketer’s goal is to erode their defenses by looking for any weakness that may appear.

Look at the supposed poster child for Real Time Marketing – the Oreo coup staged during the blackout in the 2013 Super Bowl. Because the messaging was surprising and clever, and because, let's face it, we weren't doing much of anything else anyway, Oreo managed to gain a foothold in our collective consciousness for a few precious seconds. So, marketers being marketers, we all stumbled over ourselves to proclaim a new channel and launch a series of new micro-attacks on consumers. That's where the 6 seconds came from. Apparently, that's the secret to storming the walls. Five seconds and you're golden. Seven seconds and you're dead.

Oreo surprised us, and it wasn’t because the message was 6 seconds long. It was because we weren’t expecting a highly relevant, highly timely message. Humans are built to respond to things that don’t fit within our expected patterns. The whole approach of marketing is to constantly blanket us with untimely, irrelevant messages. Marketers, to be fair, try to deliver the right message at the right time to the right person, but it’s really hard to do that. So, we overcompensate by delivering lots of messages all the time to everyone, hoping to get lucky. Not to take anything away from the cleverness and nimbleness of the Oreo campaign, but they got lucky. We were surprised and we let our defenses down long enough to be amused and entertained. Real time marketing wasn’t a brilliant new channel; it was a shot in the dark – literally.

And there's no six-second gold standard of engagement. If you can deliver the right message at the right time to the right person, you can spend hours talking to your prospective customer. It's only when you're trying to interrupt someone with something irrelevant that you have to hope you can shoehorn it into their consciousness. Think of it like a Maslow's hierarchy of advertising effectiveness. At its best, advertising should be useful. This sits at the top of the pyramid. After usefulness comes relevance – even if I don't find the ad useful to me right now, at least you're talking to the right person. After relevance comes entertainment – I'll willingly give you a few seconds of my time if I find your message amusing or emotionally engaging. I may not buy, but I'll spend some time with you. After entertainment comes the category the majority of advertising falls into – a total waste of my time. Not useful, irrelevant, not emotionally engaging. And making an ad that falls into this category 5 seconds long, no matter what channel it's delivered through, won't change that. You may fool me once, but next time, I'm still going to ignore you.

There was something important happening during the Oreo campaign at the 2013 Super Bowl, but it had nothing to do with some new magic formula or some recently discovered loophole in our cognitive defenses. It was a sign of what may, hopefully, emerge as a trend in advertising – nimble, responsive marketing that establishes a true feedback loop with prospects. What may have happened when the lights went out in New Orleans is that we found a new, very potent way to make sense of our markets and establish a truly interactive, responsive dialogue with them. If this is the case, we may have just found a way to climb a rung or two on the Advertising Effectiveness Hierarchy.